
Add extension points to the test runner to support user defined behaviour #2225

TheFriendlyCoder opened this issue May 11, 2024 · 3 comments


@TheFriendlyCoder

Coming from a Python background where I have made extensive use of the pytest library, I became accustomed to the level of customizability that library offers. One of its most powerful / useful features is its extensible fixture system, which lets users heavily customize the mechanics of the test runner via a set of hooks / extension points, allowing custom behaviour to be "injected" into various parts of the runner's sequencing. The most notable examples are fixtures that can be scoped to different levels (per test, per module, per session) and hooks into specific stages of the test orchestration pipeline.

I haven't been able to find comparable extension points in the Dart testing framework. I did notice that there were some plans to add support for plugins to the test runner, but I'm not sure the tasks I found there offer nearly the same level of customization that other test frameworks like pytest provide, in particular around fixture scoping and pipeline hooks. Such features greatly enhance the ability of developers to customize the runner's behaviour for unique workflows.
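For readers unfamiliar with pytest, here is a minimal sketch of what those two extension points look like on the Python side (the fixture names and the printed message are invented purely for illustration):

```python
# conftest.py -- minimal sketch of pytest's scoped fixtures and runner hooks
import pytest

@pytest.fixture(scope="session")
def shared_credentials():
    # Session scope: built once per run, shared by every test that requests
    # it, and torn down after the last such test finishes.
    creds = {"user": "ci", "token": "example"}
    yield creds
    creds.clear()  # teardown code runs after the yield

@pytest.fixture(scope="module")
def module_log_dir(tmp_path_factory):
    # Module scope: a fresh log directory for each test module.
    return tmp_path_factory.mktemp("logs")

def pytest_runtest_setup(item):
    # Hook invoked by the runner before every test; `item` exposes the
    # test's id, markers, and owning module, so per-test behaviour can be
    # injected without touching the tests themselves.
    print(f"setting up {item.nodeid}")
```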

Below are some workflows that I've used personally in other test suites that would benefit greatly from having this flexibility:

  • being able to aggregate log output per test, per module, or per suite/subset of tests
  • being able to orchestrate shared state that is only required within certain scopes (e.g. shared credentials for a group of tests, or shared resources like Docker environments)
  • being able to optimize the use of expensive resources depending on which tests are selected (e.g. in a suite of 100 tests where 90 are lightweight and use mocks and 10 use heavy resources like Docker containers, the expensive resources should only be created and destroyed when tests that require them are selected for the current run; see the sketch after this list)
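A minimal sketch of how that last point plays out in pytest, where the laziness falls out of fixture scoping (the nginx container here is just a stand-in for any expensive resource):

```python
# test_example.py -- sketch: the expensive fixture only ever runs if a
# selected test actually requests it
import subprocess
import pytest

@pytest.fixture(scope="session")
def docker_service():
    # Session-scoped fixtures are lazy: this body executes only when the
    # first selected test that requests the fixture runs. A run filtered to
    # the 90 mock-based tests never touches Docker at all.
    cid = subprocess.check_output(
        ["docker", "run", "-d", "nginx:alpine"], text=True
    ).strip()
    yield {"container_id": cid}
    # Torn down once, after the last test that used it has finished.
    subprocess.run(["docker", "rm", "-f", cid], check=True)

def test_lightweight():
    # Stands in for the 90 cheap tests: never requests the fixture.
    assert 1 + 1 == 2

def test_heavyweight(docker_service):
    # Stands in for the 10 expensive tests: all share the one container.
    assert docker_service["container_id"]
```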

I know this enhancement request is intentionally open ended, but I thought it would be good to create a starting point for discussing options for adding extension points like those described above, and then maybe branch off more specific line items depending on which ones are most feasible.

@TheFriendlyCoder

Relates, in part, to my request here as well.


TheFriendlyCoder commented May 11, 2024

In case it helps focus the discussion, feel free to use this pytest config from one of my open source projects as an example. Some of the most notable features here are:

  • a fixture that launches a Docker container running the Jenkins service, but only when at least one test in the suite that requires the service is selected for the current run; the container is started only once and its connection parameters are shared with all other tests that need it
  • allowing the Docker container to be preserved between test runs, controlled by a simple command line switch, so you don't incur the expensive startup cost of Docker every time
  • the ability to dynamically disable specific tests at runtime based on a user defined / custom command line switch (e.g. to easily skip any test that makes use of said Docker container; see the sketch after this list)
  • the ability to add a shim between the test fixture and a test library called vcrpy (similar to DartVCR), which is smart enough to auto-start a live service using Docker when a new test has been added that hasn't yet had its HTTP response data recorded, and to skip this expensive step and use the pre-recorded data on subsequent runs, without any additional intervention by the developer
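To make the command-line-switch bullet concrete, this is the standard pytest recipe for it; the --skip-docker flag and the docker marker are names I've invented for the example:

```python
# conftest.py -- sketch of a user-defined switch that disables tests at runtime
import pytest

def pytest_addoption(parser):
    # Registers a custom command line switch with the test runner.
    parser.addoption(
        "--skip-docker",
        action="store_true",
        default=False,
        help="skip any test that needs the Docker-hosted service",
    )

def pytest_collection_modifyitems(config, items):
    # Runs after collection but before execution; `items` is the list of
    # tests selected for this run, so they can be disabled individually.
    if not config.getoption("--skip-docker"):
        return
    skip = pytest.mark.skip(reason="--skip-docker was given")
    for item in items:
        if "docker" in item.keywords:  # tests tagged with @pytest.mark.docker
            item.add_marker(skip)
```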

@TheFriendlyCoder

It is also worth noting that, for each of these hooks and extension points, pytest passes relevant contextual information to each callback so you can make informed decisions based on the current / active state of the runner: for example, you can see not only the complete list of tests in the suite, but also which tests are enabled or disabled and which are selected / filtered for the current run. This information is invaluable for taking full advantage of the various hooks.
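For instance, here is a short sketch of the selection context pytest exposes; the hooks are real pytest hooks, while the bookkeeping around them is my own:

```python
# conftest.py -- sketch of the runner state available to pytest hooks
deselected = []

def pytest_deselected(items):
    # Called with every test removed by -k / -m filtering, so a plugin can
    # see exactly which tests were NOT selected for this run.
    deselected.extend(items)

def pytest_collection_finish(session):
    # `session.items` is the final list of tests that will actually run;
    # together with the deselected list this gives the full picture the
    # comment above describes.
    print(f"selected {len(session.items)} tests, "
          f"deselected {len(deselected)}")
```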
