Inter-test dependencies #48

Open
feuerbach opened this Issue Jan 29, 2014 · 32 comments

feuerbach (Owner) commented Jan 29, 2014

@jstolarek writes:

I want to run test A first and if it succeeds I want to run test B (if A fails B is skipped).

@joeyh has also requested this feature in #47.

The main problem here is a syntax/API for declaring dependencies. It would be relatively easy for hspec-style monadic test suites, but I don't see a good way to do it using the current list-based interface.

(This could be a reason to move to the monadic interface. But it's a big change, and I'd like to explore other options first.)

jstolarek commented Jan 29, 2014

Java does this using annotations (see here).

feuerbach (Owner) commented Jan 29, 2014

I see. Well, we could also refer to other tests by their names, I guess.

The syntax then would be something like

tests = testGroup "Tests"
  [ testCase "bar" ...
  , ifSucceeds "bar" $ testCase "foo" ...
  ]

ifSucceeds should interpret its argument as a pattern, I think.

The monadic syntax would be, respectively,

tests = testGroup "Tests" $ do
  bar <- testCase "bar" ...
  ifSucceeds bar $ testCase "foo" ...

  • The string-based syntax is, of course, prone to typos and fragile under test renaming.
  • The monadic syntax requires the test and its dependencies to be in the same do-block. Even though it's possible to return test identifiers from do-blocks, it's not very convenient, especially when there are several such identifiers.
  • There's also some repetition in the monadic syntax: we have to name both the test ("bar") and its identifier (bar).

Other thoughts?

Also, what should the semantics be in the case when the dependency is excluded (e.g. by a pattern), but the dependent test is included? I think the depending test should still run. The logic is that presumably if the dependency fails, the dependent test would certainly fail, too, but since we don't know whether the dependency failed or not, there's no reason not to try the dependent test.

sjoerdvisscher commented Jan 29, 2014

With the string-based syntax you could check all references before running any tests. Then typos would not be a problem in practice.
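That pre-flight check can be sketched in a few lines (a hypothetical representation, not tasty's actual API: assume each test carries its name and the names it depends on):

```haskell
-- Hypothetical shape: each test is a name plus the names it depends on.
-- Return every dependency reference that does not resolve to a declared test.
missingRefs :: [(String, [String])] -> [String]
missingRefs tests =
  [ dep | (_, deps) <- tests, dep <- deps, dep `notElem` declared ]
  where
    declared = map fst tests
```

Running this once over the whole tree before any test executes turns a typo into an immediate, global error rather than a silently skipped dependency.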

feuerbach (Owner) commented Jan 29, 2014

@sjoerdvisscher good point!

jstolarek commented Jan 29, 2014

I agree with the reasoning about dependent tests.

Oh wait, can I exclude some tests from running in tasty? I wasn't aware of that.

feuerbach (Owner) commented Jan 29, 2014

@jstolarek yes, see http://documentup.com/feuerbach/tasty#options/patterns (you may want to skim through the rest of the document, too :)

joeyh commented Jan 29, 2014

If A creates a resource before B uses it, and C cleans it up, can a dependency ensure C is run any time A is run? This might be a desirable property. OTOH, this duplicates part of resources (but as I mentioned in #47, it made sense for me to want to run some tests as part of setting up a complicated resource).

Also, this may impact tasty-rerun. If B failed before, but A succeeded, I'd want a rerun to still run A again before rerunning B.

see shy jo

esmolanka commented Jan 29, 2014

The question is: should dependencies form an arbitrary DAG or just a tree? (I guess the former.)

  • DAGs fall out of the monadic approach almost for free, but it has the redundant-names problem pointed out earlier (foo <- testCase "foo");
  • Trees can be built with some clever combinators; that's also free, and test references stay anonymous, but such a language could be too weak for more complex cases;
  • String-typed references are, IMO, bad: they require cycle detection, topological sorting, and name checking, all for a single benefit: no name duplication.

Example combinator for trees:

tests = testGroup "Tests"
  [ testCase "bar" ...
    |>  -- `ifSucceedsThen` 
       [ testCase "foo" ...
       , ...
       ]
  ]

I would try a combination of the monadic and combinator approaches.

feuerbach (Owner) commented Jan 29, 2014

@joeyh having an edge type for ordering (run C no earlier than A or B finish) sounds reasonable.

Having an edge type that reverses the effect of a pattern (that may happen to exclude C but not A) — a bit less so.

(I'm trying to think whether these features make sense generally, outside of this particular use case.)

If I could replace tasty's resources with dependencies, it would be great, but it doesn't seem practically possible for many reasons.

feuerbach (Owner) commented Jan 29, 2014

@esmolanka I think you're talking not about DAGs vs trees (which doesn't matter here much — DAGs are fine), but about dependencies following the structure of the test tree itself, which I find too restrictive. Or maybe I just don't understand your point at all.

Topological sorting is necessary in any case.

and this is for a single benefit: no name duplication

Not at all. The main reason is that I don't want to change the main list-based syntax, because that would be a rather significant change. Also, as I said above, passing identifiers out of a do-block is inconvenient.

jstolarek commented Jan 30, 2014

I think the problem is complicated by the fact that tests are already organized in a tree, while adding test dependencies organizes them into a dependency graph. So on the one hand there is this graph, and on the other the test hierarchy.

My question is: what is the motivation behind organizing tests in a tree as they are now? I presume this is supposed to reflect the logical structure of the tests as perceived by the user, but the truth is that if there are no dependencies between tests, they don't need to be in a tree.

feuerbach (Owner) commented Jan 30, 2014

Yes, that's just for the user convenience — for the same reasons we organize files into directories, although technically it's not necessary.

Also, it makes it easier to exclude or include a subtree for running, apply options to a subtree etc.

ocharles (Collaborator) commented Feb 2, 2014

I don't like the idea of doing this based on strings, so I would be in favour of @feuerbach's original monad proposal. This feels like the most type-safe and scope-safe way of doing it. The worry about a name being required might not actually be so bad in practice: if you can get a whole testGroup to depend on just one testCase, then maybe you can use composition to transparently pass the name through (like how we use >>=).

feuerbach (Owner) commented Feb 2, 2014

@ocharles could you explain what in particular you don't like about strings?

Here's a less stringy (but still dynamic) alternative: mark tests with values of any type you want (as long as it's Typeable and Eq, and maybe Ord for efficiency), and refer to them by that identifier.

The worry about a name being required might not actually be so bad in practice

As I said above, this is not my biggest worry (just a nice bonus of not going monadic).

The main reason is that I don't want to change the main list-based syntax, because it would a rather significant change. Also, passing identifiers out of a do-block is inconvenient.

if you can get a whole testGroup to depend on just one testCase, then maybe you can use composition to transparently pass the name through

We can have such a combinator, too. So as long as it is sufficient, you can have a perfectly static graph without any identifiers.
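The "mark tests with any Typeable, Eq value" idea can be sketched with an existential wrapper. (The name TestKey is invented here for illustration; it is not part of tasty.)

```haskell
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable (Typeable, cast)

-- Invented for illustration: a dependency key is any value whose type
-- is Typeable and Eq. Two keys match only if they have the same type
-- and compare equal at that type.
data TestKey = forall k. (Typeable k, Eq k) => TestKey k

sameKey :: TestKey -> TestKey -> Bool
sameKey (TestKey a) (TestKey b) =
  case cast b of
    Just b' -> a == b'   -- same type: compare the values
    Nothing -> False     -- different types never match
```

Users could then define something like data SetupPhase = DbReady | FilesReady deriving Eq and depend on DbReady, getting compiler-checked identifiers instead of strings.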

feuerbach (Owner) commented Feb 4, 2014

Here's another angle from which to look at this problem.

It seems to me that the majority of use cases follow the pattern: perform a linear sequence of actions (with possible data dependencies between them); while we're doing so, perform a number of checks and display the result of each check as a separate test.

If that is all we need, then perhaps we can support it with something simpler than arbitrary test dependencies? Something much like HUnit, where you can have multiple assertions per Assertion; only in our case the successful assertions (and the first unsuccessful one) would render as separate tests in tasty.

It would look like:

testCase "Compound test" $ do
  a <- liftIO actionA
  check "a is good" (checkA a)
  b <- liftIO (actionB a)
  check "b is good" (checkB b)

And in the output you could see

Compound test
  a is good [OK]
  b is good [OK]

These subtests wouldn't be first-class: they are simple assertions (so they can't be quickcheck tests or test groups); they can't be selected with patterns. I think this is reasonable.

Are there any use cases that this approach wouldn't cover? (Or any other objections?)

(Hm, maybe this is what everyone was talking about, but I was too blind to see it?)
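As a rough illustration of these semantics (the names Compound, check, and liftIO are invented here; tasty's eventual API may well differ), here is a minimal monad that logs one line per named check and aborts at the first failure:

```haskell
-- Invented types for illustration; not tasty's API. A Compound action
-- logs one line per executed check; the first failing check aborts the
-- rest, mirroring "the successful assertions and the first unsuccessful one".
newtype Compound a = Compound { runCompound :: IO (Either String a, [String]) }

instance Functor Compound where
  fmap f m = Compound $ do
    (r, w) <- runCompound m
    pure (fmap f r, w)

instance Applicative Compound where
  pure x = Compound (pure (Right x, []))
  mf <*> mx = Compound $ do
    (rf, wf) <- runCompound mf
    case rf of
      Left e  -> pure (Left e, wf)
      Right f -> do
        (rx, wx) <- runCompound mx
        pure (fmap f rx, wf ++ wx)

instance Monad Compound where
  m >>= k = Compound $ do
    (r, w) <- runCompound m
    case r of
      Left e  -> pure (Left e, w)      -- earlier failure: skip the rest
      Right x -> do
        (r', w') <- runCompound (k x)
        pure (r', w ++ w')

-- Record a named assertion; a False result fails the compound test.
check :: String -> Bool -> Compound ()
check name ok = Compound $ pure $
  if ok then (Right (), [name ++ " [OK]"])
        else (Left name, [name ++ " [FAILED]"])

-- Embed an IO action (the actionA / actionB steps above).
liftIO :: IO a -> Compound a
liftIO io = Compound $ do
  x <- io
  pure (Right x, [])
```

The log collected by runCompound is exactly what a runner would render under the "Compound test" heading.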

feuerbach (Owner) commented Feb 5, 2014

In other words, this will be just another test provider (essentially an HUnit replacement). All that will have to be done in tasty itself is just support for displaying dynamic sub-tests/assertions.

ocharles (Collaborator) commented Feb 5, 2014

I'm not quite sure how that's different from hspec's it-style stuff. Could you elaborate on how it's different?

feuerbach (Owner) commented Feb 5, 2014

Does it have to be different? :) Indeed, it looks quite similar.

jstolarek commented Feb 13, 2014

they are simple assertions (so they can't be quickcheck tests or test groups)

I don't like that restriction. If I want to run a number of tests and, only if all of them succeed, run a particular test, then I cannot organize my tests into groups - I am forced to flatten the test tree into a list. OTOH, I'd rather have such a feature now than something more sophisticated in a couple of months.

Are there any use cases that this approach wouldn't cover?

TBH I'm not sure about my use case. I want to test whether a program runs (with tasty-program) and if the program runs successfully I want to check whether output that it produced is exactly as expected (with tasty-golden).

feuerbach (Owner) commented Feb 13, 2014

If I want to run a number of tests and, only if all of them succeed, run a particular test, then I cannot organize my tests into groups - I am forced to flatten the test tree into a list.

Can you give me a convincing practical example of such a case?

OTOH I'd rather have such feature now than something more sophisticated in a couple of months.

The simple version that I propose could be ready in a couple of months (unless someone else wants to do the work). Something graph-like would take much longer.

I want to test whether a program runs (with tasty-program) and if the program runs successfully I want to check whether output that it produced is exactly as expected (with tasty-golden).

I don't think this is a good idea. Why do you need these to be separate tests? Why not simply do this through the goldenTest function?

Golden tests are mostly about automated management of golden files. Suppose we want to update a golden file. How are we supposed to know that, in order to obtain the current output, we need to run those other tests?

jstolarek commented Feb 13, 2014

Why do you need these to be separate tests?

I prefer a report from the test suite that says running the program failed and there was no attempt to test its output, rather than having a golden test fail and having to examine manually whether the program could not be run successfully, or it ran successfully but produced unexpected output. So having two separate tests would be more convenient for me. It's the same reason you don't put multiple assertions into a single unit test: atomicity of the test.

feuerbach (Owner) commented Feb 13, 2014

If the program fails, just make the error message say so. Would it be not clear enough?

Specifically, instantiate a in the type of goldenTest with Maybe ByteString. Make the value getter return Nothing when the program fails. Produce the appropriate error message in the comparison function.

ozgurakgun (Contributor) commented Nov 20, 2014

Any updates on this? If needed, I am happy to work on this because my tests for a project are taking quite some time, they are almost embarrassingly parallel, and it is becoming embarrassing to run them on a 32-core machine with no parallelism. :)

Cheers!

feuerbach (Owner) commented Nov 20, 2014

@ozgurakgun can you describe your use case in more detail? what kind of tests are these, what dependency (and why) is there?

ozgurakgun (Contributor) commented Nov 20, 2014

@feuerbach thanks for the quick reply.

I guess the easiest way to describe my setup will be staying as closely as possible to the code.

testSpecs :: [(A, [B], [C])]
testA :: A -> TestTree
testB :: B -> TestTree
testC :: C -> TestTree

allTests :: TestTree
allTests = testGroup "all"
    [ testGroup "some_name" (testA a : map testB bs ++ map testC cs)
    | (a, bs, cs) <- testSpecs
    ]

Here, each entry in the top-level list is independent. They can be run in parallel.

In each entry, A needs to run first, then Bs can be run in parallel, then Cs can be run in parallel.

I guess what I need is a variation of the testGroup combinator, which will run the test trees in the list sequentially, exploiting parallelism for each item before starting the next. This idea is very similar to what has been suggested before in this thread. It may require restructuring the test specification in certain cases, but I think that is acceptable. At least in my current case.

I would be happy if I could write the following.

allTests :: TestTree
allTests = testGroup "all"
    [ testGroup_Sequentially "some_name" [testA a, bGroup, cGroup]
    | (a, bs, cs) <- testSpecs
    , let bGroup = testGroup "bs" (map testB bs)
    , let cGroup = testGroup "cs" (map testC cs)
    ]

(Please someone find a better name for testGroup_Sequentially!)

feuerbach (Owner) commented Nov 20, 2014

But why is there a dependency? Is it because Bs and Cs depend on the side effects of As? Or does A test some pre-condition, and if it fails, there's no point in running Bs and Cs?

ozgurakgun (Contributor) commented Nov 20, 2014

Actually, both. A produces some files as output, which are required for Bs. Hence, if A fails Bs and Cs do not need to be run. (Though I don't care too much if they are run anyway.)

feuerbach (Owner) commented Nov 20, 2014

I see.

Here's how I see the design. First of all, I'm very reluctant at this point to make breaking changes in the core interface. Thus we should refer to other tests by their names, and there should be combinators along the lines of runBefore "test1", runAfter "test2" etc. These combinators create metainformation that is used to sort the tests topologically and then execute them in that order, also exploiting parallelism.

Also, as suggested by Sjoerd, there should be a check somewhere in the beginning that all references actually resolve.

I don't have resources to work on this atm (and in the foreseeable future), but you can totally try it yourself. I'm happy to review your early designs/prototypes.
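The "sort topologically, then execute" step can be sketched as follows (hypothetical input shape, not actual tasty code: each test's name paired with the names it must run after):

```haskell
import Data.List (partition)

-- Hypothetical: (test name, names it must run after). Produce an order in
-- which every test comes after its dependencies, or Nothing if there is a
-- cycle (or an unresolved reference, which the up-front check should catch).
topoOrder :: [(String, [String])] -> Maybe [String]
topoOrder = go []
  where
    go done []      = Just (reverse done)
    go done pending =
      case partition (\(_, deps) -> all (`elem` done) deps) pending of
        ([], _)       -> Nothing  -- nothing is runnable: a cycle (or bad ref)
        (ready, rest) -> go (reverse (map fst ready) ++ done) rest
```

Tests that become ready in the same round (the ready list) have no ordering constraints between them, so they are exactly the ones that can run in parallel.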

mightybyte commented Feb 9, 2015

I think I agree with @ocharles that the monadic interface feels like the way to go here. It seems like we need the most powerful abstraction possible at the lowest level; otherwise we keep running into these situations where, as Ed says, "we can't say all the things".

quchen commented Dec 15, 2016

Bump because I’ve been missing this feature a lot for a long time now!

onSuccessRun :: TestTree -> TestTree -> TestTree
onSuccessRun precondition moreTests = ...

testGroup "foo"
    [ testCase "bar" ...
    , let preconditions = testGroup "ipsum"
            [ ... ]
      in onSuccessRun preconditions (testCase "lorem" ...)
    ]
    ]
feuerbach (Owner) commented Dec 16, 2016

@quchen as I said above,

I don't have resources to work on this atm (and in the foreseeable future), but you can totally try it yourself. I'm happy to review your early designs/prototypes.

The API you propose looks fine to me.

quchen commented Dec 17, 2016

Oh, I missed that. But it’s good to know you’re open to such a feature, so contributors can be sure it is merged once they get it done. :-)
