consider table-driven tests #392

Closed
petertseng opened this issue Nov 13, 2017 · 3 comments · Fixed by #394

Comments

@petertseng
Member

In many tests, we have some number of input/output pairs, and the validation we perform on each pair is the same. To avoid repeating ourselves, we often extract the validation into one function.

For further deduplication, and to avoid adding so many test functions, we might consider the approach taken in https://github.com/exercism/rust/blob/master/exercises/perfect-numbers/tests/perfect-numbers.rs: a table-driven test.
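
For reference, the table-driven shape looks roughly like this (a minimal sketch; classify and Classification are stand-ins for the perfect-numbers API, and the cases shown are illustrative):

#[test]
fn test_all() {
    // One row per case: an input paired with its expected classification.
    let cases = [
        (6, Classification::Perfect),
        (12, Classification::Abundant),
        (13, Classification::Deficient),
    ];
    for &(input, expected) in cases.iter() {
        assert_eq!(classify(input), expected);
    }
}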

Let us decide on the set of circumstances under which table-based tests should be used (all the time, none of the time, or something in between).

@coriolinus
Member

coriolinus commented Nov 13, 2017

I am opposed to table-based testing, because it hides information from the person running the tests.

  • The first failing assert panics and aborts the whole test function, so any subsequent tests which might also have failed are simply never run.
  • Users can see that a certain assert failed, and the parameters which failed, but they don't necessarily get a named test description or comment explaining what the test was meant to verify about their code. With individual tests, the worst case is that they at least get a somewhat descriptive function name.
  • Users who prefer to develop incrementally instead of simply removing all the #[ignore] lines at once are denied this option in table-based testing. There's only one such line to remove, after which all cases are run.

If Rust or Cargo supported the notion of sub-tests, I would withdraw this objection. Sub-tests don't address my second and third objections, but they do solve the first, and that's the most serious one. However, AFAIK Rust does not have this feature and it isn't on the roadmap.

The Rusty way to approach this problem would be to use macros. We could, I suppose, develop and publish an official exercism Rust test macro; it's not hard to imagine what it would look like:

// Expands one (name, function, params...) group into a single #[test] function.
macro_rules! test {
    ($test_name:ident, $property_test_func:ident, $( $param:expr ),*) => {
        #[test]
        fn $test_name() {
            $property_test_func($( $param ),*)
        }
    }
}

// Expands a whole table of `;`-separated cases into individual tests.
macro_rules! tests {
    ($property_test_func:ident => $( $test_name:ident, $( $param:expr ),* );+ ) => {
        $(
            // Note the trailing semicolon: a macro invoked in item position needs it.
            test!($test_name, $property_test_func, $( $param ),*);
        )+
    }
}
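
Invoked against a hypothetical check_classification helper (the names and cases here are illustrative, not the exercise's actual test file), it would expand into three separate #[test] functions:

fn check_classification(input: u64, expected: Classification) {
    assert_eq!(classify(input), expected);
}

tests!(check_classification =>
    smallest_perfect_number, 6, Classification::Perfect;
    smallest_abundant_number, 12, Classification::Abundant;
    smallest_deficient_number, 13, Classification::Deficient
);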

However, that would introduce either an explicit dependency on some exercism-affiliated crate, or a copy-paste job by the exercise author. In either case, it obscures from a new student exactly how their code is being tested.

Given that it would probably need to be implemented within the test file anyway, I'd prefer to leave such a macro to the individual exercise implementers.

In short, I'm opposed to table-based testing and would be inclined to request changes on a PR which used it. Macros can give the appearance of table-based testing while generating a full, real test suite behind the scenes, but I believe that the implementation of the relevant macro should be left to the exercise implementer.

@ijanos
Contributor

ijanos commented Nov 13, 2017

I also favor more granular test cases; a test called test_all just feels wrong. What does it mean when that test fails? That everything failed? Probably not. Something failed. I feel table-based test cases are an anti-pattern and we should not promote them.

coriolinus added a commit to coriolinus/exercism_rust that referenced this issue Nov 13, 2017
This PR is in response to exercism#392.

Perfect-numbers currently uses table-based testing. For reasons outlined
in exercism#392 (comment)
I believe that table-based testing is not the appropriate way to do things.

This is what macro-based tests might look like. Are they abstruse and
opaque? Yes. However, they at least become individual tests, and I
believe that this is _better than the status quo_.
@petertseng
Member Author

Ah good, that's a majority of us. This is excellent. I believe we are in a good position to say table-based tests are not for this track.

To my knowledge, perfect-numbers is the only exercise that uses them, so this issue should be closed once something makes perfect-numbers no longer use them.

This issue can then be referred to whenever table-based tests come up in the future.

Considerations that can cause this to change in the future:

Any subsequent tests which might also have failed are simply never run.

I find that https://users.rust-lang.org/t/table-driven-aka-data-driven-testing/3848 makes the same observation. It seems unlikely that the model of using assert, which panics, will change, so some other way of showing all the results would be needed.

My own MO with most exercises is simply to run cargo test && cargo test -- --ignored, so I'm basically running all tests at once. So I would like to see as many failures at once as I possibly can.
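
One way to show every failure without changing the assert model would be to collect mismatches and panic only once, at the end. A rough sketch (classify_as_str and the cases are hypothetical, not anything the track uses):

#[test]
fn all_cases() {
    let cases = [(6, "perfect"), (12, "abundant"), (13, "deficient")];
    let mut failures = Vec::new();
    for &(input, expected) in cases.iter() {
        let actual = classify_as_str(input); // hypothetical function under test
        if actual != expected {
            failures.push(format!("classify({}): expected {}, got {}", input, expected, actual));
        }
    }
    // A single panic at the end lists every failing case, not just the first.
    assert!(failures.is_empty(), "{} case(s) failed:\n{}", failures.len(), failures.join("\n"));
}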

Users can see that a certain assert failed, and the parameters which failed, but they don't necessarily get a named test description or comment explaining what the test was meant to verify about their code

It may be possible to use the optional panic message argument to assert and assert_eq to explain individual cases, but I have not tested how this looks on an actual failure. Investigation of this would be required.
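
For reference, assert_eq! does accept trailing format arguments, so each table row could carry its own description (classify and Classification are again stand-ins):

// On failure, the trailing arguments are appended to the panic message.
assert_eq!(
    classify(6),
    Classification::Perfect,
    "6 is the smallest perfect number"
);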

Users who prefer to develop incrementally instead of simply removing all the #[ignore] lines at once are denied this option in table-based testing.

Ironically, mashing all the tests into one currently makes life easier for those who prefer this approach, since there's only one #[ignore] to remove (no need to constantly edit the test file), but assert means that only the first failure is displayed. If the first problem is solved (there should be a way to cause subsequent failed tests to be displayed), then we would need to ensure there is still a way to achieve this behaviour.
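
If the macro approach wins out, one hypothetical way to keep per-case ignores would be a second macro arm that also emits #[ignore] (a sketch only, not necessarily what any PR does):

macro_rules! test {
    // `ignore` prefix: same expansion, but the test starts out ignored.
    (ignore $test_name:ident, $property_test_func:ident, $( $param:expr ),*) => {
        #[test]
        #[ignore]
        fn $test_name() {
            $property_test_func($( $param ),*)
        }
    };
    // Plain case: expands to an active #[test] function.
    ($test_name:ident, $property_test_func:ident, $( $param:expr ),*) => {
        #[test]
        fn $test_name() {
            $property_test_func($( $param ),*)
        }
    }
}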

coriolinus added a commit to coriolinus/exercism_rust that referenced this issue Nov 19, 2017
coriolinus added a commit to coriolinus/exercism_rust that referenced this issue Nov 20, 2017

Squashed commits:
- Conditionally disable count-ignores.sh; disable for perfect-numbers
- Document .meta/ignore-count-ignores
- Update to use inline ignores