feat: Targeted testing support
Stranger6667 committed Apr 23, 2020
1 parent ce20655 commit cce1e95
Showing 12 changed files with 167 additions and 5 deletions.
2 changes: 2 additions & 0 deletions docs/changelog.rst
@@ -12,6 +12,7 @@ Added
- Storing network logs with ``--store-network-log=<filename.yaml>``.
The stored cassettes are based on the `VCR format <https://relishapp.com/vcr/vcr/v/5-1-0/docs/cassettes/cassette-format>`_
and contain extra information from the Schemathesis internals. `#379`_
- Targeted property-based testing in CLI and runner. It only supports `response_time` target at the moment. `#104`_

Fixed
~~~~~
@@ -1055,6 +1056,7 @@ Fixed
.. _#109: https://github.com/kiwicom/schemathesis/issues/109
.. _#107: https://github.com/kiwicom/schemathesis/issues/107
.. _#106: https://github.com/kiwicom/schemathesis/issues/106
.. _#104: https://github.com/kiwicom/schemathesis/issues/104
.. _#101: https://github.com/kiwicom/schemathesis/issues/101
.. _#99: https://github.com/kiwicom/schemathesis/issues/99
.. _#98: https://github.com/kiwicom/schemathesis/issues/98
1 change: 1 addition & 0 deletions docs/index.rst
@@ -14,6 +14,7 @@ Welcome to schemathesis's documentation!
:caption: Contents:

usage
targeted
faq
changelog

76 changes: 76 additions & 0 deletions docs/targeted.rst
@@ -0,0 +1,76 @@
.. _targeted:

Targeted property-based testing
===============================

Schemathesis supports targeted property-based testing by calling ``hypothesis.target`` inside its runner and provides
an API to guide data generation towards certain pre-defined goals:

- ``response_time``. Hypothesis will try to generate input that is more likely to produce a higher response time.

To illustrate this feature, consider the following aiohttp endpoint that contains a hidden performance problem:
the more zeroes the input number contains, the slower it works, and if it contains more than 10 zeroes, it causes an
internal server error:

.. code:: python

    async def performance(request: web.Request) -> web.Response:
        decoded = await request.json()
        number = str(decoded).count("0")
        if number > 0:
            # emulate hard work
            await asyncio.sleep(0.01 * number)
        if number > 10:
            raise web.HTTPInternalServerError
        return web.json_response({"slow": True})

Let's check whether Schemathesis can discover this issue and how long it takes:

.. code:: bash

    $ schemathesis run --hypothesis-max-examples=100000 http://127.0.0.1:8081/swagger.yaml
    ...
    1. Received a response with 5xx status code: 500
    Check           : not_a_server_error
    Body            : 58150920460703009030426716484679203200
    Run this Python code to reproduce this failure:
        requests.post('http://127.0.0.1:8081/api/performance', json=58150920460703009030426716484679203200)
    Or add this option to your command line parameters: --hypothesis-seed=240368931405400688094965957483327791742
    ================================================== SUMMARY ==================================================
    Performed checks:
        not_a_server_error            67993 / 68041 passed          FAILED
    ============================================ 1 failed in 662.16s ===========================================

And with targeted testing (the ``.hypothesis`` directory was removed between these test runs to avoid reusing results):

.. code:: bash

    $ schemathesis run --target=response_time --hypothesis-max-examples=100000 http://127.0.0.1:8081/swagger.yaml
    ...
    1. Received a response with 5xx status code: 500
    Check           : not_a_server_error
    Body            : 2600050604444474172950385505254500000
    Run this Python code to reproduce this failure:
        requests.post('http://127.0.0.1:8081/api/performance', json=2600050604444474172950385505254500000)
    Or add this option to your command line parameters: --hypothesis-seed=340229547842147149729957578683815058325
    ================================================== SUMMARY ==================================================
    Performed checks:
        not_a_server_error            22039 / 22254 passed          FAILED
    ============================================ 1 failed in 305.50s ===========================================

This behavior is generally reproducible, though not guaranteed, due to the randomness of data generation. Still, it
shows a significant reduction in testing time, especially with a large number of examples.

The Hypothesis `documentation <https://hypothesis.readthedocs.io/en/latest/details.html#targeted-example-generation>`_ provides a detailed explanation of targeted property-based testing.
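Conceptually, targeted generation is random search plus a feedback signal: inputs that score well on the reported metric are kept and mutated further. The following toy, dependency-free sketch (not Hypothesis's actual algorithm, and not part of this commit) applies that idea to the zero-counting metric from the endpoint above:

```python
import random


def count_zeroes(number: int) -> int:
    # The metric from the example endpoint: zeroes make it slow.
    return str(number).count("0")


def targeted_search(metric, examples: int, seed: int = 0) -> int:
    # Keep the best-scoring input seen so far and alternate between
    # mutating that champion and sampling fresh random inputs.
    rng = random.Random(seed)
    best = rng.randrange(10 ** 20)
    for _ in range(examples):
        if rng.random() < 0.5:
            candidate = best * 10 + rng.randrange(10)  # mutate the champion
        else:
            candidate = rng.randrange(10 ** 20)  # explore randomly
        if metric(candidate) > metric(best):
            best = candidate
    return best
```

With a few hundred examples this search accumulates far more zeroes than pure random sampling typically finds, which mirrors why the targeted CLI run above needed fewer examples to trigger the 500 error.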
13 changes: 13 additions & 0 deletions src/schemathesis/cli/__init__.py
@@ -9,6 +9,7 @@
from .. import checks as checks_module
from .. import models, runner
from ..runner import events
from ..runner.targeted import DEFAULT_TARGETS_NAMES, Target
from ..types import Filter
from ..utils import WSGIResponse
from . import callbacks, cassettes, output
@@ -45,6 +46,15 @@ def schemathesis(pre_run: Optional[str] = None) -> None:
@click.option(
"--checks", "-c", multiple=True, help="List of checks to run.", type=CHECKS_TYPE, default=DEFAULT_CHECKS_NAMES
)
@click.option(
"--target",
"-t",
"targets",
multiple=True,
help="Targets for input generation.",
type=click.Choice([target.name for target in Target]),
default=DEFAULT_TARGETS_NAMES,
)
@click.option(
"-x", "--exitfirst", "exit_first", is_flag=True, default=False, help="Exit instantly on first error or failed test."
)
@@ -152,6 +162,7 @@ def run( # pylint: disable=too-many-arguments
auth_type: str,
headers: Dict[str, str],
checks: Iterable[str] = DEFAULT_CHECKS_NAMES,
targets: Iterable[str] = DEFAULT_TARGETS_NAMES,
exit_first: bool = False,
endpoints: Optional[Filter] = None,
methods: Optional[Filter] = None,
@@ -177,6 +188,7 @@
SCHEMA must be a valid URL or file path pointing to an Open API / Swagger specification.
"""
# pylint: disable=too-many-locals
selected_targets = tuple(target for target in Target if target.name in targets)

if "all" in checks:
selected_checks = checks_module.ALL_CHECKS
@@ -198,6 +210,7 @@ def run( # pylint: disable=too-many-arguments
exit_first=exit_first,
store_interactions=store_network_log is not None,
checks=selected_checks,
targets=selected_targets,
workers_num=workers_num,
validate_schema=validate_schema,
hypothesis_deadline=hypothesis_deadline,
8 changes: 8 additions & 0 deletions src/schemathesis/runner/__init__.py
@@ -11,13 +11,15 @@
from ..utils import dict_not_none_values, dict_true_values, file_exists, get_base_url, get_requests_auth, import_app
from . import events
from .impl import BaseRunner, SingleThreadRunner, SingleThreadWSGIRunner, ThreadPoolRunner, ThreadPoolWSGIRunner
from .targeted import DEFAULT_TARGETS, Target


def prepare( # pylint: disable=too-many-arguments
schema_uri: Union[str, Dict[str, Any]],
*,
# Runtime behavior
checks: Iterable[CheckFunction] = DEFAULT_CHECKS,
targets: Iterable[Target] = DEFAULT_TARGETS,
workers_num: int = 1,
seed: Optional[int] = None,
exit_first: bool = False,
@@ -70,6 +72,7 @@ def prepare( # pylint: disable=too-many-arguments
app=app,
validate_schema=validate_schema,
checks=checks,
targets=targets,
hypothesis_options=hypothesis_options,
seed=seed,
workers_num=workers_num,
@@ -112,6 +115,7 @@ def execute_from_schema(
app: Optional[str] = None,
validate_schema: bool = True,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
workers_num: int = 1,
hypothesis_options: Dict[str, Any],
auth: Optional[RawAuth] = None,
@@ -150,6 +154,7 @@ def execute_from_schema(
runner = ThreadPoolWSGIRunner(
schema=schema,
checks=checks,
targets=targets,
hypothesis_settings=hypothesis_options,
auth=auth,
auth_type=auth_type,
@@ -163,6 +168,7 @@ def execute_from_schema(
runner = ThreadPoolRunner(
schema=schema,
checks=checks,
targets=targets,
hypothesis_settings=hypothesis_options,
auth=auth,
auth_type=auth_type,
@@ -177,6 +183,7 @@ def execute_from_schema(
runner = SingleThreadWSGIRunner(
schema=schema,
checks=checks,
targets=targets,
hypothesis_settings=hypothesis_options,
auth=auth,
auth_type=auth_type,
@@ -189,6 +196,7 @@ def execute_from_schema(
runner = SingleThreadRunner(
schema=schema,
checks=checks,
targets=targets,
hypothesis_settings=hypothesis_options,
auth=auth,
auth_type=auth_type,
15 changes: 14 additions & 1 deletion src/schemathesis/runner/impl/core.py
@@ -16,6 +16,7 @@
from ...schemas import BaseSchema
from ...types import RawAuth
from ...utils import GenericResponse, capture_hypothesis_output
from ..targeted import Target

DEFAULT_DEADLINE = 500 # pragma: no mutate

@@ -31,6 +32,7 @@ def get_hypothesis_settings(hypothesis_options: Dict[str, Any]) -> hypothesis.se
class BaseRunner:
schema: BaseSchema = attr.ib() # pragma: no mutate
checks: Iterable[CheckFunction] = attr.ib() # pragma: no mutate
targets: Iterable[Target] = attr.ib() # pragma: no mutate
hypothesis_settings: hypothesis.settings = attr.ib(converter=get_hypothesis_settings) # pragma: no mutate
auth: Optional[RawAuth] = attr.ib(default=None) # pragma: no mutate
auth_type: Optional[str] = attr.ib(default=None) # pragma: no mutate
@@ -66,6 +68,7 @@ def run_test(
endpoint: Endpoint,
test: Union[Callable, InvalidSchema],
checks: Iterable[CheckFunction],
targets: Iterable[Target],
results: TestResultSet,
**kwargs: Any,
) -> Generator[events.ExecutionEvent, None, None]:
@@ -80,7 +83,7 @@ def run_test(
result.add_error(test)
else:
with capture_hypothesis_output() as hypothesis_output:
test(checks, result, **kwargs)
test(checks, targets, result, **kwargs)
status = Status.success
except (AssertionError, hypothesis.errors.MultipleFailures):
status = Status.failure
@@ -133,9 +136,16 @@ def run_checks(case: Case, checks: Iterable[CheckFunction], result: TestResult,
raise get_grouped_exception(*errors)


def run_targets(targets: Iterable[Target], elapsed: float) -> None:
for target in targets:
if target == Target.response_time:
hypothesis.target(elapsed, label="response_time")


def network_test(
case: Case,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
result: TestResult,
session: requests.Session,
request_timeout: Optional[int],
@@ -145,6 +155,7 @@
# pylint: disable=too-many-arguments
timeout = prepare_timeout(request_timeout)
response = case.call(session=session, timeout=timeout)
run_targets(targets, response.elapsed.total_seconds())
if store_interactions:
result.store_requests_response(response)
run_checks(case, checks, result, response)
@@ -174,6 +185,7 @@ def prepare_timeout(timeout: Optional[int]) -> Optional[float]:
def wsgi_test(
case: Case,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
result: TestResult,
auth: Optional[RawAuth],
auth_type: Optional[str],
@@ -186,6 +198,7 @@
start = time.monotonic()
response = case.call_wsgi(headers=headers)
elapsed = time.monotonic() - start
run_targets(targets, elapsed)
if store_interactions:
result.store_wsgi_response(case, response, headers, elapsed)
result.logs.extend(recorded.records)
2 changes: 2 additions & 0 deletions src/schemathesis/runner/impl/solo.py
@@ -21,6 +21,7 @@ def _execute(self, results: TestResultSet) -> Generator[events.ExecutionEvent, N
endpoint,
test,
self.checks,
self.targets,
results,
session=session,
request_timeout=self.request_timeout,
@@ -39,6 +40,7 @@ def _execute(self, results: TestResultSet) -> Generator[events.ExecutionEvent, N
endpoint,
test,
self.checks,
self.targets,
results,
auth=self.auth,
auth_type=self.auth_type,
14 changes: 11 additions & 3 deletions src/schemathesis/runner/impl/threadpool.py
@@ -12,6 +12,7 @@
from ...types import RawAuth
from ...utils import capture_hypothesis_output, get_requests_auth
from .. import events
from ..targeted import Target
from .core import BaseRunner, get_session, network_test, run_test, wsgi_test


@@ -20,6 +21,7 @@ def _run_task(
tasks_queue: Queue,
events_queue: Queue,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
settings: hypothesis.settings,
seed: Optional[int],
results: TestResultSet,
@@ -30,14 +32,15 @@ def _run_task(
while not tasks_queue.empty():
endpoint = tasks_queue.get()
test = make_test_or_exception(endpoint, test_template, settings, seed)
for event in run_test(endpoint, test, checks, results, **kwargs):
for event in run_test(endpoint, test, checks, targets, results, **kwargs):
events_queue.put(event)


def thread_task(
tasks_queue: Queue,
events_queue: Queue,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
settings: hypothesis.settings,
auth: Optional[RawAuth],
auth_type: Optional[str],
@@ -53,20 +56,23 @@ def thread_task(
# pylint: disable=too-many-arguments
prepared_auth = get_requests_auth(auth, auth_type)
with get_session(prepared_auth, headers) as session:
_run_task(network_test, tasks_queue, events_queue, checks, settings, seed, results, session=session, **kwargs)
_run_task(
network_test, tasks_queue, events_queue, checks, targets, settings, seed, results, session=session, **kwargs
)


def wsgi_thread_task(
tasks_queue: Queue,
events_queue: Queue,
checks: Iterable[CheckFunction],
targets: Iterable[Target],
settings: hypothesis.settings,
seed: Optional[int],
results: TestResultSet,
kwargs: Any,
) -> None:
# pylint: disable=too-many-arguments
_run_task(wsgi_test, tasks_queue, events_queue, checks, settings, seed, results, **kwargs)
_run_task(wsgi_test, tasks_queue, events_queue, checks, targets, settings, seed, results, **kwargs)


def stop_worker(thread_id: int) -> None:
@@ -142,6 +148,7 @@ def _get_worker_kwargs(self, tasks_queue: Queue, events_queue: Queue, results: T
"tasks_queue": tasks_queue,
"events_queue": events_queue,
"checks": self.checks,
"targets": self.targets,
"settings": self.hypothesis_settings,
"auth": self.auth,
"auth_type": self.auth_type,
@@ -161,6 +168,7 @@ def _get_worker_kwargs(self, tasks_queue: Queue, events_queue: Queue, results: T
"tasks_queue": tasks_queue,
"events_queue": events_queue,
"checks": self.checks,
"targets": self.targets,
"settings": self.hypothesis_settings,
"seed": self.seed,
"results": results,
9 changes: 9 additions & 0 deletions src/schemathesis/runner/targeted.py
@@ -0,0 +1,9 @@
from enum import Enum, unique

DEFAULT_TARGETS = ()
DEFAULT_TARGETS_NAMES = ()


@unique
class Target(Enum):
response_time = 1
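The ``Target`` enum above is dispatched in ``run_targets`` (see ``core.py`` earlier in this diff), which reports the elapsed response time to ``hypothesis.target``. A standalone sketch of that dispatch, with the ``hypothesis.target`` call replaced by an injected ``observe`` callback (an assumption of this sketch, so it runs without Hypothesis installed):

```python
from enum import Enum, unique
from typing import Callable, Iterable


@unique
class Target(Enum):
    response_time = 1


def run_targets(
    targets: Iterable[Target],
    elapsed: float,
    observe: Callable[[float], None],
) -> None:
    # For each selected target, report the observed metric.
    # In the real runner, ``observe`` is ``hypothesis.target`` with
    # label="response_time", which steers generation toward slow inputs.
    for target in targets:
        if target is Target.response_time:
            observe(elapsed)
```

Keeping the dispatch in one place like this makes it easy to add further metrics later (e.g. response size) by extending the enum and adding another branch.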
