
Build a testing module #417

Open · masak opened this issue Oct 30, 2018 · 11 comments

masak commented Oct 30, 2018

Just like a module file can have a MAIN function, it can also have one or more functions marked with an is test trait (or #257 a @test annotation). Presumably, calling bin/007 myscript --test will run these functions. Presumably x 2, a usage message will include the --test flag if there are @test annotations.

  • I'm thinking the tests'll run after the mainline has run, just like MAIN does. The general case will be that the mainline doesn't do anything, except maybe initialize values. (A sketch of such a module follows this list.)
  • The functions run in a random order.
  • Tests can complete successfully (green), complete with an assertion error (red), or throw some other exception (um, purple? light gray?)
  • Only top-level functions are allowed to have the @test annotation. If a nested function has it, that's a (beautiful) compile-time error.
  • It seems to me there are many different types of annotations. The @test annotation looks like a (Contextual macros #349) contextual macro hosted by the compunit, which can collect an array of references to @test-annotated functions, to be run after the mainline. (Conceptually, through a LEAVE phaser on the mainline or something.)
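
Here's a minimal sketch of what such a module might look like. The @test annotation is the proposal above; the Cart class, its methods, and the test function names are stand-ins invented purely for illustration:

my cart;    # mainline does nothing except initialize values

@test
func totalAmountStartsAtZero() {
    cart = new Cart {};
    assert cart.totalAmount() == 0;
}

@test
func totalAmountReflectsItems() {
    cart = new Cart {};
    cart.addLineItem(productId: 55, amount: 42);
    assert cart.totalAmount() == 42;
}

Running bin/007 myscript --test would run the (trivial) mainline first, then the two test functions, in random order.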

Importing the test module

It's recommended that you import the things you need from the test module:

 import { test, assert, threw } from test;

or

import * from test;

You could also, if you want, import only the module:

import test;

And then use the qualified names, like @test.test and test.assert.

Assertions

assert is a prefix macro. It expects an expression, a predicate that we want to be truthy for the test to pass.

assert cart.totalAmount() == 42;

(Edit: But see the comment further down, which argues for spelling it expect instead.)

  • The @test annotation will make sure that the function has at least one leftmost assert somewhere. It's OK for them to be nested below the top block of the function (for example, in for loops).
  • It's possible to have several assertions in a @test function. They short-circuit, like the boolean operators, in the sense that a failed assert aborts later execution in the function. (Both points are sketched below.)
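
A sketch of both points (the Cart class is again just an invented stand-in):

@test
func totalGrowsWithEachItem() {
    my cart = new Cart {};
    for [10, 20, 30] -> amount {
        cart.addLineItem(productId: 1, amount: amount);
        assert cart.totalAmount() >= amount;    # nested in a for loop; still counts
    }
    assert cart.totalAmount() == 60;    # never runs if an assert above failed
}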

The strength of assert (and a very nice use case for macros) is that when an assertion fails, enough information has been saved to give really specific diagnostics:

## Failed test: 'total amount reflects things in cart'

Assertion failed in lib/cart.007, line 55:

|    assert cart.totalAmount() == 42;
|           ^^^^^^^^^^^^^^^^^^ ^^
|                   |           |
|                   |           +----- False
|                   |
|                   +----------------- 0

Without going into detail, the assert macro has done the job of saving line-and-file information, the whole code in the assertion (looking at you, Python), as well as intercepting any non-constant subexpressions — it intercepts cart.totalAmount(), not 42 — saving those values away somewhere for the diagnostics.
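
Concretely, the expansion might look something like the following. All the names here (the temporaries, the X.AssertFailed type and its fields) are invented for illustration; the point is only to show where values get intercepted:

my lhs = cart.totalAmount();    # non-constant subexpression: intercepted
my ok = lhs == 42;              # 42 is a constant: not intercepted
if !ok {
    throw new X.AssertFailed {
        file: "lib/cart.007",
        line: 55,
        source: "assert cart.totalAmount() == 42;",
        values: [lhs, ok],      # 0 and False in the output above
    };
}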

(Very conjectural, but: in some cases it might be possible to data-flow back through the relevant function/method calls, at compile time, and install the appropriate "spies" in that code as well, to provide "reason" diagnostics, the kind of things you'd be wont to look up when you saw the test failure. A decent balance would need to be struck between being informative but not too verbose.)

Asserting an exception got thrown

The threw prefix is meant to be placed in an assert predicate immediately after a line of business code we expect to fail with a particular exception type.

cart.removeLineItem(productId: 55);    # we expect it to fail
assert threw X.ProductNotFound;

A couple of comments:

  • Yes, if the computation fails, the code with the assert threw is "dead"... but that's fine, because the assert has made sure of the following (see the sketch after this list):
    • (a) It creates a CATCH block for itself (if there is one already, that's a pleasing compiler error), where it checks the type of the exception.
    • (b) The threw prefix actually returns False, because if it ever runs, then we didn't actually throw anything.
    • (c) assert knows to give a good diagnostic in this case, even including the preceding statement that should've thrown the exception.
    • (d) We might even give a custom diagnostic if the wrong exception is thrown; I'm of two minds about that. Is it better to only let the normal exception-in-a-test diagnostic handle that? I don't know. (Later edit: let the normal exception-in-a-test diagnostic handle it, but give it runtime information that it can include to indicate which exception should have been thrown.)
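
Putting (a) through (d) together, a rough desugaring of the example above. (007 has no CATCH block yet, and both it and the names below are invented, just to show the moving parts:)

cart.removeLineItem(productId: 55);
# (b) if we ever get here, nothing was thrown, so the predicate is False
throw new X.AssertFailed {
    message: "expected X.ProductNotFound to be thrown",    # (c)
};
CATCH -> e {    # (a) installed by the assert macro
    if e ~~ X.ProductNotFound {
        # expected exception; the test passes
    }
    else {
        throw e;    # (d) let the normal exception-in-a-test diagnostic report it
    }
}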

If we want to capture the exception and introspect it (for example checking the exact message, or other fields), use the with statement (#156):

cart.removeLineItem(productId: 55);    # we expect it to fail
with threw X.ProductNotFound -> e {
    assert e.message == "Product with id 55 was not found";
}

The assert keyword is implied in this case; using it is fine, and may count as extra documentation, but just using with is enough.

Other annotations

I can see the use case immediately for a @beforeEach annotation, and an @afterEach one. They are "phaser-like", in that they are instrumented to run around the tests.

Maybe @beforeAll and @afterAll could be useful too; their main benefit would be that they only run when the module is run in test mode.
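
A usage sketch, reusing the invented Cart fixture from above:

my cart;

@beforeEach
func setUp() {
    cart = new Cart {};    # every test starts from a fresh cart
}

@afterEach
func tearDown() {
    cart = none;    # or whatever cleanup is called for
}

@test
func startsEmpty() {
    assert cart.totalAmount() == 0;
}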

I'm not at all ready to do any kind of nested testing. I think that's extra complexity without any benefit.

I was gonna say we might want to have an annotation for parameterized tests, but... I'm not actually convinced that'd be a win compared to just having a for loop inside the test function.

However, I do see a lovely use for a @mock annotation on (typed #33) parameters. That's far into the future, but... nice way not to have to initialize the mock yourself, and still have it mock the right dependency. Of course, you're likely to want to initialize the SUT in @beforeEach or @beforeAll, so mocked parameters should work there too.

I'm not aware of a reason to make the test module work with classes instead of functions, although such a reason might exist.


masak commented Nov 10, 2018

> And then use the qualified names, like @test.test and test.assert.

Nope on the latter; thinko. The "real name" of the prefix thus imported would be test.prefix:assert (and you're free to call it like a function), but there is no qualified prefix symbol.

...fortunately. I was not looking forward to implementing that extra bit of parser complexity. 😂


masak commented Dec 1, 2018

I just realized that the assertType of #256 (which I'm implementing in 007 as we speak) is really a special case of assert. That is, instead of

assertType(n, Int);

one could write

assert n ~~ Int;

It (a) satisfies the kind of orthogonality we're looking for in our primitives, and (b) suggests that perhaps asserting is not limited to testing. I like the sound of both of those ideas.

I don't mind making assert a built-in. It would still be able to give excellent error messages; just not as part of a test failure in this case. Whenever an assert is used inside of a test, it "plugs into" the test reporter. Oh, and as a built-in, it'd be one less thing to import all the time into the tests.


masak commented Dec 1, 2018

As I was reading "How to Design Co-Programs", I was thinking maybe there should be a @useCase annotation or some such, enumerating all the possible types of input and output of a function or method. Kind of like a minimal test that you'd write anyway. The idea being that the unit tests themselves could then focus on "higher" matters: how things combine and interact.

Maybe such an annotation could even be included in API documentation somehow. (And it should be designed with that in mind, with maybe a string parameter describing the use case.)
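
Something like this, where both the annotation and its string parameter are of course just a sketch:

@useCase("an empty cart totals to zero")
func emptyCartTotalsToZero() {
    my cart = new Cart {};
    assert cart.totalAmount() == 0;
}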

Also, maybe a linter tool of some kind could easily detect if some input or output case was missing, and flag that up as a missing @useCase annotation.

masak pushed a commit that referenced this issue Dec 1, 2018
According to a throwaway comment in #256.

This built-in might not be long-lived; the comment at
#417 (comment) suggests it might be better to fold this
one into `assert` and make the latter a built-in.

Until then, consider `assertType` a stopgap.

masak commented Dec 14, 2018

> The strength of assert (and a very nice use case for macros) is that when an assertion fails, enough information has been saved to give really specific diagnostics

The way Ava reports test failures is very close to what I had in mind.


masak commented Mar 16, 2019

> • Only top-level functions are allowed to have the @test annotation. If a nested function has it, that's a (beautiful) compile-time error.

This is in fact necessary, because the @test-annotated functions need to have been evaluated into actual (non-static) function values by the time we call them. There might be fully legitimate cases of tests closing over mainline variables.

I'm thinking say()ing things in the mainline should be allowed, but frowned upon. Maybe they should show up in a dark grey or something. Maybe linters should point to them as being not-recommended in the face of @test annotations.


masak commented Apr 18, 2019

I think as a pleasant side effect of @test being an annotation and statically analyzable, we'll be able to tell statically where the tests are and how many of them there are. (For prove-like test runners.) Quite a nice improvement on use Test::More tests => 23; and done-testing;.


masak commented Apr 23, 2019

> assert is a prefix macro.

No, it isn't, not with our tight prefixes at least.

Same as #300 (comment), it could be a term.


masak commented Apr 25, 2019

I know I've used assert throughout this issue, under the assumption that it's cute to make assert condition mean one thing outside of tests (if !condition { stopWithAnError() }) and another thing inside of tests (if !condition { throw X.AssertFailed() }).

I no longer believe those two concepts should be mixed into a single keyword. I'd prefer to use expect for the thing within the tests.
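
Roughly this division of labor, that is (expect being the proposed spelling, not anything implemented):

# production code: a failed assert stops the program with an error
func checkout(cart) {
    assert cart.totalAmount() > 0;
    # ...
}

# test code: a failed expect throws X.AssertFailed and turns the test red
@test
func newCartTotalsToZero() {
    my cart = new Cart {};
    expect cart.totalAmount() == 0;
}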


masak commented May 16, 2019

...and we should probably not have the assert keyword at all, but instead PRE blocks like in #15.


masak commented Jul 14, 2022

> I know I've used assert throughout this issue, under the assumption that it's cute to make assert condition mean one thing outside of tests (if !condition { stopWithAnError() }) and another thing inside of tests (if !condition { throw X.AssertFailed() }).
>
> I no longer believe those two concepts should be mixed into a single keyword. I'd prefer to use expect for the thing within the tests.

Counterargument: D uses assert in tests.


masak commented Jul 28, 2023

> (Very conjectural, but: in some cases it might be possible to data-flow back through the relevant function/method calls, at compile time, and install the appropriate "spies" in that code as well, to provide "reason" diagnostics, the kind of things you'd be wont to look up when you saw the test failure. A decent balance would need to be struck between being informative but not too verbose.)

It strikes me that the @test annotation itself would need to provide a context (#349) for the assert macro, in order to be able to both inspect the code outside of its own argument, and to insert those spies.
