Ideas for future changes to gabbi. Things listed here are not promises of what will happen. They are ideas for inspiring thought and discussion about what might happen. Some of these may be too specific to be ideas and might be better served as issues. It can be hard to decide.
It would be useful to explicitly and clearly define the public API (cf. https://github.com/cdent/gabbi/pull/102#issuecomment-147238170).
This would also be an opportunity to highlight the two entry points, programmatic API vs. CLI, in order to avoid confusion.
If there is confusion on this point it needs to also be addressed in the canonical docs.
(@FND has made a start at this in Architecture.)
It would be helpful to get a birds-eye overview of the different pieces and how they fit together (see ContentHandlers section below for some rough examples), along with a rationale for why things are the way they are.
Some of the stages:
- YAML parsing
- test [suite] generation
- test execution
- results reporting
From a conversation with @cdent:
The key engine of gabbi is the GabbiSuite. It differs from a standard TestSuite via two special bits of customization:

a) the TestCases within it are ordered and each represents one HTTP request

b) a suite can have a collection of fixtures that are set up and torn down at the start and end of the suite; fixtures are normally per test, not per suite

So what gabbi does is read a YAML file and turn it into a GabbiSuite containing HTTPTestCases.
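As a sketch of that flow, here is a small gabbi YAML file of the kind that would become an ordered suite (the paths and fixture name are hypothetical; the keys are gabbi's documented test format):

```yaml
# Each entry under tests: becomes one ordered HTTPTestCase in the suite.
fixtures:
  - SampleFixture   # set up once for the whole suite, torn down at the end

tests:
  - name: create a widget
    POST: /widgets
    data:
      name: flange
    status: 201

  - name: fetch the widget just created
    GET: $LOCATION   # ordered tests can refer to the prior response
    status: 200
```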
Rename the gabbi-run script to gabbi

- @FND argues that to users of `gabbi-run` (who don't necessarily know or care about the underlying Python implementation details), that command *is* gabbi, so there's no reason to introduce an artificial distinction (which feels a little like CLI invocations are second-class citizens)
- @cdent is afraid such overloading (Python package vs. CLI command) will create ambiguity
- note that such a change would not be backwards-compatible unless we keep a `gabbi-run` alias (with a deprecation warning)
Consider ways to bail out of a poll early
When using the `poll:` functionality there may be cases where it would be worthwhile to bail out of the polling session early and fail the test. The use case here is where three separate outcomes might happen on the polled resource: outright failure, status 200 with a failure message in the body, or status 200 with a success message in the body. This turns out to be a thing in some aspects of OpenStack Heat's stacks, but there are ways around it by using the events reporting API. However it seems likely the issue may come back up, so leaving notes here for future discussion.
One possible way to implement "bail out early" might be to do something like:
```yaml
tests:
  - name: bail test
    url: /somewhere/something
    status: 200
    poll:
      count: 10
      delay: .5
      abort:
        response_json_paths:
          $.foobar: FAILURE
    response_json_paths:
      $.foobar: SUCCESS
```
This would effectively run two response checks on the same result set. If the one within abort failed, then that's a sign that we should exit now and not loop. It's not clear if the implementation of this would be easy or hard, but the idea underlying the concept is that it ought to be possible to reuse the existing test evaluation (so we'd also be able to check status, response_strings, response_headers, etc in the abort section).
However this is really noisy and enables testing of APIs that shouldn't be like that in the first place so there would need to be quite a lot of compelling evidence for why this was a reasonable thing to do before even considering it seriously.
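As a thought experiment only, the abort-during-poll idea might look something like the following, where `fetch`, `check`, and `abort_check` are hypothetical stand-ins for making the request and evaluating the normal and abort response checks (this is not gabbi's actual API):

```python
import time


def poll_with_abort(fetch, check, abort_check, count=10, delay=0.5):
    """Poll fetch() up to count times, sleeping delay seconds between tries.

    check(response) returning True means the test passes; abort_check(response)
    returning True means we bail out immediately and fail without more polling.
    All names here are hypothetical sketches, not gabbi's real interfaces.
    """
    for attempt in range(count):
        response = fetch()
        if abort_check(response):
            # The abort condition matched: fail now instead of looping.
            raise AssertionError(
                'abort condition matched on attempt %d' % (attempt + 1))
        if check(response):
            return response
        time.sleep(delay)
    raise AssertionError('poll exhausted after %d attempts' % count)
```

The point of reusing `check`-shaped callables for both slots is that the existing test evaluation (status, response_strings, response_headers, etc.) could serve double duty in the `abort` section.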
A version of content handlers has been introduced in Gabbi 1.26.0. It is backwards compatible with response handlers so no major version was required. See the docs.
There's been discussion about how to handle more than just JSON-based content types in response handlers and $RESPONSE-style replacements. The currently pending idea is to work out an abstraction that encapsulates any behavior that is content-type dependent into a single module that is effectively activated when the content-types it supports are currently in play (either in the current request or the one referenced by `prior`). The mechanism for that activation, and how to encapsulate and abstract things, is to be determined, as some experimentation will be required to figure out the right solution.
This will augment but not replace ResponseHandlers, as those operate in a slightly different way. Whereas now there is a JSONResponseHandler, there will be something like a ContentTypeResponseHandler which engages with a JSONContentHandler to do the necessary work.
Obviously this idea has a long way to go, but we wanted to note that discussion is in progress.
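One way the activation mechanism could work is a registry of handlers, each declaring which content-types it accepts. This is only a sketch under assumed names (`accepts`, `dispatch`, `HANDLERS` are invented here, not gabbi's API):

```python
import json


class JSONContentHandler:
    """Hypothetical content handler keyed by content-type."""

    @staticmethod
    def accepts(content_type):
        # Activate for any application/json variant, e.g. with a charset.
        return content_type.startswith('application/json')

    @staticmethod
    def loads(body):
        return json.loads(body)


# A ContentTypeResponseHandler might scan registered handlers and engage
# the first one whose accepts() matches the content-type in play.
HANDLERS = [JSONContentHandler]


def dispatch(content_type, body):
    for handler in HANDLERS:
        if handler.accepts(content_type):
            return handler.loads(body)
    raise ValueError('no content handler for %s' % content_type)
```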
ContentHandlers should affect:
- response handlers (i.e. `gabbi-run` currently provides a `gabbi_response_handlers` hook for registering custom `ResponseHandler` classes)
  - note that this hook is not currently being used by the programmatic API
  - in the future, we might want to unify the paths used by gabbi's programmatic API and `gabbi-run` to construct and execute tests; in addition, we might want to change `RESPONSE_HANDLERS` to a list of suffix-handler tuples (cf. replacers below) and provide an API for extensions to register custom handlers
- replacers (for substitution)
  - gabbi currently maintains a list of replacer names that are associated with corresponding functions
  - in the future, we might change that to a list of name-function tuples, with functions being passed both the current and the prior test; we might also want to provide an API for extensions to add their own replacers
- request bodies (cf. #96)
See gabbi-html for an example of custom response handlers and replacers.
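The name-function tuple idea for replacers could look roughly like this (all names here are hypothetical illustrations, not gabbi's current implementation):

```python
# Hypothetical: replacers as (name, function) pairs, each function
# receiving both the current and the prior test so it can inspect either.
def location_replacer(current, prior):
    # Pull the Location header from the prior test's response.
    return prior['response_headers'].get('location', '')


REPLACERS = [
    ('LOCATION', location_replacer),
]


def replace(template, current, prior):
    """Substitute each registered $NAME in template via its function."""
    for name, func in REPLACERS:
        template = template.replace('$%s' % name, func(current, prior))
    return template
```

An extension API could then be as simple as appending additional tuples to the list.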
It might be nice to separate the various layers of functionality to allow for custom composition (e.g. to simplify CLI reporting rather than shoehorning testing frameworks into providing the desired output):
- parsing gabbi-specific YAML into HTTPTests (ideally as a generator, yielding one test at a time?)
  - note that unittest-style test discovery does not support iterative consumption
- it might be nice to declare extensions (e.g. response handlers) within each YAML file, avoiding extensions always residing in memory for all tests/suites even though they're not needed
- executing a test and performing its assertions
- this might require first executing any dependent tests
- reporting errors/failures
dynamic test generation
It would be interesting to be able to generate tests dynamically at runtime. For example, this should allow creating a crawler that recursively traverses a hypermedia API and checks responses' validity.
unittest's test discovery process might thwart such open-ended dynamics.
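A toy version of such a crawler, free of unittest, could generate one check per discovered link; here `get(url)` returning a status and list of links is a stand-in for a real HTTP call against a hypermedia API (everything in this sketch is hypothetical):

```python
# Hypothetical recursive crawler that yields a (url, status) check for
# every page reachable from a start URL, deduplicating via a seen set.
def crawl(get, start, seen=None):
    seen = set() if seen is None else seen
    if start in seen:
        return
    seen.add(start)
    status, links = get(start)
    yield start, status
    for link in links:
        yield from crawl(get, link, seen)


# A tiny in-memory "site" standing in for live HTTP responses.
SITE = {
    '/': (200, ['/a', '/b']),
    '/a': (200, ['/']),   # cycles are handled by the seen set
    '/b': (404, []),
}

results = dict(crawl(lambda url: SITE[url], '/'))
```

Each yielded pair could instead become a dynamically generated gabbi test, which is where unittest's up-front discovery model gets in the way.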