Test the unhappy path #60
Conversation
test/meta_testing_tools.jl
Outdated
Invoking it via `metatest_get_failures(f)` will prevent those `@test` being added to the
current testset, and will return a collection of all nonpassing test results.
"""
function metatest_get_failures(f)
Should this just be called `get_failures(f)` rather than `metatest_get_failures(f)`?
yeah, `extract_failures` or `fail_results` (since they are `Test.Fail <: Test.Result`) or just `failures`?
They are any non-pass, so they could be Errors, or some other cause; I don't recall what `@test_broken` returns. Arguably we should call it `nonpasses`.
yeah, `nonpasses` is better (to avoid the `failures` = `Fail` confusion I had). `@test_broken` also treats all non-passes the same:
julia> @test_broken 1 == 2
Test Broken
  Expression: 1 == 2

julia> @test_broken error()
Test Broken
  Expression: error()
(reviewed with @bencottier) One design question and a couple of naming comments, but otherwise I think the implementation broadly looks good -- I've asked Ben to follow up with any comments on the implementation later this afternoon.
Overall, Alt 1 and (probably) Alt 2 both seem less nice to us.
test/testers.jl
Outdated
@testset "unhappy path" begin
    alt_double(x) = 2x
    @scalar_rule(alt_double(x), 3)  # this is wrong, on purpose
    failures = metatest_get_failures(()->test_scalar(alt_double, 2.1))
We want to test that `test_scalar` returns a test failure (rather than a pass or an error) when the `@scalar_rule` is wrong -- is that right?
I wonder if the testset name can be more precise about that than "unhappy path".
We want to make sure it doesn't succeed. I can't think of a better name. "Unhappy path" is a term from the literature:
https://cucumber.io/blog/test-automation/happy-unhappy-paths-why-you-need-to-test-both/
"we want to make sure it doesn't succeed" -- I am confused by this: why don't we only care about failures? (Also, the current naming is confusing to me if it includes more than `Test.Fail`s.)
https://github.com/JuliaDiff/ChainRulesTestUtils.jl/pull/60/files#r501079347
This is also why I prefer to be more precise than "unhappy path"... unless we explicitly document that "unhappy path" means "anything that doesn't result in only `Test.Pass`".
test/testers.jl
Outdated
    return 2.5 * x, identity_pullback
end

@test !isempty(metatest_get_failures(()->frule_test(my_identity1, (2.2, 3.3))))
I do wonder if there's an alternative design (or just another layer of abstraction?) so we end up with a call more like

@test_failure frule_test(my_identity1, (2.2, 3.3))

or

@test fails(() -> frule_test(my_identity1, (2.2, 3.3)))

?
Yeah, I think the latter makes sense.
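For illustration, the helper being suggested could be as small as the following sketch, built on the PR's `metatest_get_failures` (a sketch of the idea, not final code):

# true iff running `f` produced at least one non-passing test result
fails(f) = !isempty(metatest_get_failures(f))

# which makes the call sites read as suggested above:
# @test fails(() -> frule_test(my_identity1, (2.2, 3.3)))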
Out of interest, why not the macro (that just wraps the call in an anonymous function and passes it to `fails`)?
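For concreteness, the macro being asked about would be something like this hypothetical sketch -- the name `@test_fails` and the reliance on a `fails` helper are assumptions, and it presumes `using Test` is in scope:

macro test_fails(expr)
    # wrap the expression in a thunk and defer to `fails`
    return :(Test.@test fails(() -> $(esc(expr))))
end

# usage: @test_fails frule_test(my_identity1, (2.2, 3.3))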
The macro looks like `@test_throws` and `@test_broken`, but it does not at all work like those do. `@test_throws` catches the exception. `@test_broken x` is very similar to `@test !x` plus some extra stuff. I would worry that people might think `@test_failure x` is the same as `@test !x`.
"people might think `@test_failure x` is the same as `@test !x`" -- yeah, great point.
Co-authored-by: Nick Robinson <npr251@gmail.com>
@nickrobinson251 you didn't comment on whether either of the Alternative Implementations mentioned in the top post might be better.
We did (above). We think Alt 1 is clearly worse. Alt 2 we didn't think too hard about, but I couldn't see what we'd gain over the current implementation.
Ah, I missed it.
_extract_failures(ts::Test.DefaultTestSet) = _extract_failures(ts.results)
function _extract_failures(xs::Vector)
    if isempty(xs)
        return Test.Result[]
If `f` actually doesn't call `@test`, we end up here -- is this the best thing to return in that case?
I think so?
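For reference, here is a sketch of how the rest of that helper might plausibly look; the diff above only shows its first lines, so this completion is an assumption rather than the PR's actual code:

# individual results: keep anything that is not a Pass
_extract_failures(res::Test.Result) = res isa Test.Pass ? Test.Result[] : Test.Result[res]
# testsets: recurse into their recorded results, which may include nested testsets
_extract_failures(ts::Test.DefaultTestSet) = _extract_failures(ts.results)
function _extract_failures(xs::Vector)
    isempty(xs) && return Test.Result[]  # `f` ran no `@test`s at all
    return mapreduce(_extract_failures, vcat, xs)
end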
💚
Co-authored-by: Nick Robinson <npr251@gmail.com>
Not going to tag a release off this as it is not a functional change.
This PR is about testing that we can actually detect when things are wrong.
The code for `metatest_get_failures` is a pretty nasty hack on the Test internals. I think it's clear enough what it is doing.
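For readers not familiar with those internals, here is a rough sketch of one way such a helper can be built on Test's internal testset stack -- the details are assumptions for illustration, not the PR's exact code:

using Test

function metatest_get_failures(f)
    # Push a throwaway testset so any @test run by `f` records into it,
    # not into the caller's current testset.
    ts = Test.DefaultTestSet("metatest")
    Test.push_testset(ts)
    try
        f()
    finally
        # Pop without calling `finish`, so nothing propagates upwards.
        Test.pop_testset()
    end
    return _extract_failures(ts)  # every recorded result that is not a Pass
end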
Two possible alternative implementations:
Alt 1: use a separate process.
Julia itself does this to test `@test`: write the test to a file and then kick it off with run(`julia file.jl`).
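A rough sketch of that approach (hypothetical helper name, an illustration of the idea rather than code from this PR):

# Write a self-contained test snippet to a temporary file, run it in a fresh
# julia process, and report whether that process exited successfully.
function passes_in_subprocess(testcode::AbstractString)
    path, io = mktemp()
    try
        write(io, testcode)
        close(io)
        # a failing top-level testset makes the julia process exit non-zero
        return success(run(ignorestatus(`$(Base.julia_cmd()) $path`)))
    finally
        rm(path; force=true)
    end
end

# e.g. @test !passes_in_subprocess("using Test; @testset \"t\" begin @test 1 == 2 end")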
But there are three problems with that approach.
Alt 2: custom Testset type.
We could declare a custom testset type which overloads `record` and `finish` so that they don't propagate errors upwards. This might be fiddly, but I have done something like it before in https://github.com/JuliaTesting/ReferenceTests.jl, so could base it off that. It should be simpler than that.
It could be pretty clean, maybe. Not 100% sure.
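As a rough illustration of that alternative (an assumption about how it could be done, not something worked out in this PR), a minimal custom testset that records everything quietly and never propagates results into the caller's real testset might look like:

using Test

struct QuietTestSet <: Test.AbstractTestSet
    description::String
    results::Vector{Any}
end
QuietTestSet(desc; kwargs...) = QuietTestSet(String(desc), [])

# record: keep every result (passes, failures, errors, nested testsets) without printing or throwing
Test.record(ts::QuietTestSet, res) = (push!(ts.results, res); res)

# finish: only report into an enclosing QuietTestSet (so nested results are kept),
# never into a surrounding DefaultTestSet
function Test.finish(ts::QuietTestSet)
    if Test.get_testset_depth() > 0 && Test.get_testset() isa QuietTestSet
        Test.record(Test.get_testset(), ts)
    end
    return ts
end

# ts = @testset QuietTestSet "quiet" begin
#     @test 1 == 2   # recorded in `ts.results`, does not fail the enclosing testset
# end
# ts.results  # contains the Test.Fail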