
Implement testing "framework" #104

Open
akerbos opened this issue Feb 2, 2017 · 1 comment

akerbos (Collaborator) commented Feb 2, 2017

We need some form of automated test.

Create a file with commands and the expected result (i.e. log messages). Check which log sections are present (i.e. which engines/extensions ran) and how many messages of which type (and which ones?) are included.
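For illustration, a test specification along these lines might look like the following. The JSON format and all field names (`document`, `parameters`, `expected`) are hypothetical, not an existing convention of this project:

```json
{
  "name": "missing-citation",
  "document": "tests/missing_citation.tex",
  "parameters": ["-e", "pdflatex"],
  "expected": {
    "engines": { "pdflatex": 1 },
    "errors": 0,
    "warnings": 1
  }
}
```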

Depends on #103

reitzig (Owner) commented Feb 24, 2018

Thoughts:

  • Specify tests in a structured way. (Ruby? JSON with schema?)
    - which test document (if any)
    - which parameters (potentially multiple sets?)
    - the expected behaviour: which engines and extensions are run how often and when?
    - the expected output: number of errors and warnings; optionally their content
  • Run (subsets of) the tests.
    - Copy test document to separate directory.
    - Run multiple tests in parallel.
    - Compare results with expectations.
  • Aggregate results and output a report (options for machine and human readability).
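The "compare with expectations" step could be sketched in Ruby roughly as below. This assumes log messages have already been parsed into `[type, text]` pairs; the spec format and the function name `check_expectations` are made up for illustration:

```ruby
require 'json'

# Count messages of each type among parsed [type, text] pairs and
# compare against the expected counts from the test spec.
# Returns a list of human-readable failure descriptions (empty on success).
def check_expectations(spec, messages)
  counts = Hash.new(0)
  messages.each { |type, _text| counts[type] += 1 }

  failures = []
  (spec['expected'] || {}).each do |type, expected_count|
    actual = counts[type]
    unless actual == expected_count
      failures << "expected #{expected_count} #{type}(s), got #{actual}"
    end
  end
  failures
end

spec = JSON.parse(<<~JSON)
  {
    "name": "missing-citation",
    "expected": { "error": 0, "warning": 1 }
  }
JSON

messages = [["warning", "Citation 'foo' undefined"]]
puts(check_expectations(spec, messages).empty? ? "PASS" : "FAIL")
```

Aggregating the per-test failure lists would then give both the machine-readable report (e.g. as JSON) and the human-readable summary.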

Bonus: Can we script/wrap the resulting test bank so that it can be run automatically by some CI tool? That would be helpful for evaluating pull requests (if we should get any).
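On the CI angle, a Travis-style configuration could be as small as the following sketch. The `rake test` entry point is an assumption about how the test bank would eventually be invoked, not an existing task:

```yaml
# .travis.yml (sketch; assumes the test bank is exposed as a Rake task)
language: ruby
rvm:
  - 2.5
install:
  - bundle install
script:
  - bundle exec rake test
```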
