OLD Possible Layers

Contributed by kcooney

Here are some ideas on possible layers that we could build. Each of these layers could be an area of independent development.

Note that the features listed are not a definitive list; they are primarily there to help define what functionality could go in which layer.

Model

The model layer provides classes to represent a collection of tests.

This layer should be very generic, and could possibly represent not only JUnit3 and JUnit4 style tests, but Lambda-based tests and possibly tests from other test frameworks (like Fit and TestNG).

We may want to use build tools (Gradle, Maven) and IDEs for inspiration, since these systems often support more than one type of test.
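
As a rough illustration only, such a generic node might expose no more than a name, whether it is a leaf or a parent, its children, the artifact it comes from, and its attributes (see the required functionality below). All names here (TestNode, etc.) are hypothetical, not a proposed API:

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Hypothetical model node; the shape and names are illustrative only.
public interface TestNode {
    String displayName();              // e.g. "MyTest.testFoo" or a story title
    boolean isLeaf();                  // leaf node (runnable test) vs. parent (suite/container)
    List<TestNode> children();         // empty for leaf nodes
    Optional<Object> sourceArtifact(); // class, method, Fit test file, etc.
    Set<String> attributes();          // annotations or tags, e.g. "Fast" or "Smoke"
}
```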

Dependencies

None

Required Functionality

  • Identifying tests
    • Type of test (leaf node vs. parent)
    • Artifacts that represent the test (class, method, Fit test file, etc.)
    • Test attributes (annotations, possibly tags)
  • Filtering or selecting tests (sketched after this list)
    • Possibly implemented by modifying a tree (like in JUnit4) or providing a view
    • Select by name or annotations
    • Use case: run all tests in this package (and possibly sub-classes) annotated with @Fast or @Smoke
  • Running the tests
    • This may be a layer on top of the model
    • Include an API that allows operations to run in parallel
      • Some kind of executor-type API
      • Some kind of fork join API
    • Stopping a test run
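
To make the selection and parallel-execution points above a bit more concrete, here is a rough sketch built on the hypothetical TestNode interface from the Model description; the fixed-size thread pool and the TestExecutor hook are assumptions, not design decisions:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class ModelRunSketch {

    // Hypothetical hook a test engine would implement to actually run a leaf node.
    interface TestExecutor {
        void execute(TestNode test);
    }

    // Use case from above: select all leaf tests carrying a given attribute, e.g. "Fast" or "Smoke".
    static List<TestNode> select(TestNode root, String attribute) {
        if (root.isLeaf()) {
            return root.attributes().contains(attribute)
                    ? Collections.singletonList(root)
                    : Collections.<TestNode>emptyList();
        }
        return root.children().stream()
                .flatMap(child -> select(child, attribute).stream())
                .collect(Collectors.toList());
    }

    // Run the selected tests on a thread pool; stopping a test run maps to shutdownNow().
    static void runInParallel(List<TestNode> tests, TestExecutor executor) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        tests.forEach(test -> pool.submit(() -> executor.execute(test)));
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```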

Possible Functionality

  • Identifying possible points of parameterization
    • For example, if a test class or method is parameterized by some data provider, the total number of values may not be known upfront, but the IDE could show that it is parameterized and expect that it will be called multiple times
  • Sorting or ordering tests
    • It's unclear if all types of tests could support sorting or ordering
    • It's possible that sorting and ordering is a part of how the tests are run, not how the model is represented
    • It's possible that sorting and ordering is something specified by the class (or test resource) not the model (both in terms of specifying the ordering and implementing the reordering)
    • May be challenging to use the model API to specify a given ordering if the model represents tests written with different APIs (JUnit3 vs JUnit4 vs story files)
  • Modifying the model
    • Ex: running a test even though it is annotated with @Ignore
  • Specifying dependencies
    • Ex: specifying that a test method/class requires some service to be already running
    • Ex: test dependencies
      • Yes, many of us want tests to be independent, but so many users seem to want dependent tests
      • Dependencies are a constraint on reordering, and possibly on filtering
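
Purely as an illustration of how dependencies could interact with filtering: if a node exposed the nodes it depends on, a selection could be expanded to always include them, and the same information would constrain reordering. The dependsOn() accessor is an assumption, not part of any existing API:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DependencySketch {

    // Hypothetical extension of the model node with declared dependencies.
    interface DependentTestNode extends TestNode {
        List<DependentTestNode> dependsOn();
    }

    // Expand a filtered selection so that every selected test's dependencies are included too.
    static Set<DependentTestNode> withDependencies(Set<DependentTestNode> selected) {
        Set<DependentTestNode> result = new HashSet<>();
        selected.forEach(node -> collect(node, result));
        return result;
    }

    private static void collect(DependentTestNode node, Set<DependentTestNode> result) {
        if (result.add(node)) {
            node.dependsOn().forEach(dependency -> collect(dependency, result));
        }
    }
}
```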

Challenges

  • How do you represent a "test" and a "suite" in a way that is flexible?
  • How do we allow users of the model API to go from source code to a node in the model, and vice versa?
  • How do you map the model structure to the runtime structure?
    • In JUnit4 terms, map Description objects to Runner objects
    • Use case: selecting a single test method in an IDE and running it, without the IDE knowing what kind of test it is
  • Are filtering and sorting done on the model (with the runtime told which tests to run), or by the engines, which then update the model (the JUnit4 model)?
    • The advantage of the JUnit4 model is that the Runner can encapsulate the logic of how to handle per-test or per-class setup and teardown in the face of filtering and sorting
    • The disadvantage of the JUnit4 model is that the logic for filtering and sorting is split between the runner and the core code. In addition, most of the logic is shared via inheritance (in ParentRunner)
  • How to support multiple types of tests (JUnit3 vs. JUnit4 vs. lambda-based)
    • ... without hard-coding the known types upfront?
  • Need to be able to serialize the model in a way that we can maintain going forward
  • Do we support running the same test multiple times?
    • Ex: You have one suite that runs all tests against an in-memory database, and another suite that creates a MySQL database on some test server and runs the tests. In JUnit4, you couldn't create a suite that includes both suites.
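
For illustration, that use case in plain JUnit4 terms; the suite and test class names are made up, and each class would live in its own file with its own database setup (e.g. in a class rule or @BeforeClass method). Because the shared tests get identical Description objects, a suite containing both of these suites does not work well:

```java
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import org.junit.runners.Suite.SuiteClasses;

// Two suites that run the same test classes against different databases.
@RunWith(Suite.class)
@SuiteClasses({ CustomerRepositoryTest.class, OrderRepositoryTest.class })
public class InMemoryDbSuite { /* in-memory database set up here */ }

@RunWith(Suite.class)
@SuiteClasses({ CustomerRepositoryTest.class, OrderRepositoryTest.class })
public class MySqlDbSuite { /* MySQL database on a test server set up here */ }
```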

Discovery

The discovery layer has functionality for finding tests.

Dependencies

  • Model
  • Test Engine Registry

Notification

The notification layer allows code to view the progress of running tests on a model.

Dependencies

  • Model

Required Functionality

Notifications for

  • test run started/ended
  • leaf node started/ended
  • parent node started/ended
  • test skipped
  • test interrupted

Callers must be able to use the Notification layer plus the Model layer to determine (see the sketch after this list)

  • tests scheduled (to render in UI)
  • tests passed/failed
  • exceptions (for failed tests)
  • parameters
  • test interrupted (i.e. started but didn't finish because test run was stopped)
  • test canceled (i.e. not run because test run was stopped)
  • timing
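
A rough sketch of how these notifications might look as a listener interface; all names are placeholders, and deriving timing, parameters, passed/failed, and canceled status is left to the caller plus the Model, as listed above:

```java
// Hypothetical listener; the methods mirror the notifications listed above.
public interface TestRunListener {
    void testRunStarted(TestNode root);
    void testRunFinished(TestNode root);

    void parentStarted(TestNode parent);
    void parentFinished(TestNode parent);

    void leafStarted(TestNode leaf);
    void leafFinished(TestNode leaf, Throwable failure); // null failure = passed

    void testSkipped(TestNode leaf, String reason);
    void testInterrupted(TestNode leaf);                 // started, but the run was stopped
}
```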

Possible Functionality

  • Notification of long-lived services starting up
  • Status of thread creation/destruction

Challenges

  • Based on our experience with JUnit, we might want two APIs: a listener API (to get the status of the test run) and a notification API (to publish status changes)
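
In JUnit4 these two roles are played by RunListener and RunNotifier. A corresponding split here might look like the following, reusing the hypothetical TestRunListener sketched above; the class name and fire methods are made up:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical publishing side: test engines call the fire* methods,
// while tools (IDEs, build tools) only register listeners.
public class TestRunNotifier {
    private final List<TestRunListener> listeners = new CopyOnWriteArrayList<>();

    public void addListener(TestRunListener listener) {
        listeners.add(listener);
    }

    public void fireLeafStarted(TestNode leaf) {
        listeners.forEach(listener -> listener.leafStarted(leaf));
    }

    public void fireLeafFinished(TestNode leaf, Throwable failure) {
        listeners.forEach(listener -> listener.leafFinished(leaf, failure));
    }

    // ...one fire method per notification in the listener interface
}
```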

Test Engines

A test engine handles running a particular type of test (JUnit3, JUnit4, JUnit Lambda). It produces a portion of the Model, and provides an API for running tests.

Some of the functionality described in the Model layer might belong here instead, including sorting and filtering.
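
A sketch of what the engine contract could look like, again with placeholder names and building on the hypothetical TestNode and TestRunNotifier sketched earlier; where exactly sorting and filtering live is deliberately left open:

```java
// Hypothetical engine contract; every supported kind of test gets its own implementation.
public interface TestEngine {
    // A short identifier such as "junit3", "junit4", or "junit-lambda".
    String id();

    // Produce this engine's portion of the Model, e.g. by scanning a classpath root.
    TestNode discover(String classpathRoot);

    // Run (a subtree of) the discovered nodes, reporting progress through the notifier.
    void execute(TestNode root, TestRunNotifier notifier);
}
```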

Dependencies

  • Model
  • Notification

Challenges

  • Making the API required to add a new engine simple but flexible
  • Way to register engines
  • For legacy reasons, the JUnit4 engine and the JUnit3 engine may need to know about each other
  • How to start experimenting with Lambda-based ways of writing tests without waiting for the Model and Notification APIs to be created?
    • One possible solution: do the first implementation using the Runner APIs

Ideas

  • Test out the API with a "clean room" implementation of a test engine that runs JUnit4-style tests that are not annotated with @RunWith
  • Test out the API with an adapter that delegates to a Runner
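
A very rough sketch of that adapter idea: the JUnit4 types (Runner, Request, RunNotifier, RunListener) are real, everything else reuses the hypothetical interfaces from the sketches above, and the Description-to-TestNode mapping is glossed over:

```java
import org.junit.runner.Description;
import org.junit.runner.Request;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;
import org.junit.runner.notification.RunNotifier;

// Adapter sketch: delegate to an existing JUnit4 Runner and translate its
// notifications into the hypothetical TestRunNotifier of the Notification layer.
public class RunnerAdapter {

    public void run(Class<?> testClass, TestNode node, TestRunNotifier target) {
        Runner runner = Request.aClass(testClass).getRunner();

        RunNotifier junit4Notifier = new RunNotifier();
        junit4Notifier.addListener(new RunListener() {
            private Throwable failure; // JUnit4 reports a failure before testFinished

            @Override
            public void testStarted(Description description) {
                failure = null;
                target.fireLeafStarted(node); // real code would map Description -> TestNode
            }

            @Override
            public void testFailure(Failure reportedFailure) {
                failure = reportedFailure.getException();
            }

            @Override
            public void testFinished(Description description) {
                target.fireLeafFinished(node, failure);
            }
        });

        runner.run(junit4Notifier);
    }
}
```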