MT 6 roadmap #666

Where can I find a roadmap of expected changes in MT6?

Comments
can we please not do this change ... this will break so many existing tests ... and it will make it less intuitive to use ... every other test framework I know of supports asserting equality against nil |
It doesn't break anything, yet. |
that's why I said "it will" ;) |
minitest/minitest#666 Use assert_nil if expecting nil from lib/minitest/spec.rb:23:in `must_equal'. This will fail in MT6.
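A warning like the one quoted above fires whenever the expected value handed to `assert_equal` (or `must_equal`) is nil. A minimal way to see it, using a small hypothetical harness that mixes in Minitest's assertions:

```ruby
require "minitest/autorun"

# Tiny harness mixing in Minitest's assertions so we can call them directly.
class Harness
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

h = Harness.new

lookup = { "a" => 1 }

# Hash#[] returns nil for a missing key, so the expected value here is nil:
h.assert_equal nil, lookup["b"] # passes, but prints the deprecation warning
h.assert_nil lookup["b"]        # the non-deprecated spelling
```

Both assertions pass today; only the first one warns.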
I'm going to do this change. It will go out on a major rev. You are not being forced to update. Yes, this will cause some work for some tests, but it also exposes tautological assertions and is worth it in the long run. It has exposed real bugs both across my many projects as well as in github's tests. It is worth it. |
@pathawks there is no official roadmap. It's in my head. |
@zenspider it would be sweet of you to share what's in your head a little more, even be a bit structured about it. |
It would be sweet... it'd also be sweet to finish the minitest book and stay on top of my 90+ projects as well as do all my client work. OH! And have a social life and people-time to stay sane... Alas, I can't do all the things and I must prioritize what I actually do and when. If someone wants to sponsor some OSS time and prioritize minitest, then sure... I'd be happy to get right on that. Until then, it'll get done when it gets done (or not at all, maybe (probably)). |
I can't answer for budget allocation, as I don't have any to allocate, but I'd be happy to allocate time regularly, and I suspect there's a couple other folk who would. Any way to help you? Perhaps with #637 ? |
I'm pretty happy changing my explicit nil expectations over to `assert_nil`. In concrete terms, though, our test suites sometimes contain methods that wrap assertions to better capture our domain, e.g.

```ruby
def assert_domain_concept(a, b, c, object)
  assert_equal a, object.method_relating_to_a, "error message referencing domain concept"
  assert_in_delta b, c, object.method_relating_to_b_and_c, "another descriptive error"
  # ...
end
```

I can't see a way of avoiding this deprecation message (or indeed this breaking upon MT6) without adding conditionals into the method, e.g.

```ruby
def assert_domain_concept(a, b, c, object)
  if a.nil?
    assert_nil object.method_relating_to_a, "message"
  else
    assert_equal a, object.method_relating_to_a, "..."
  end
  # ...
end
```

... but given enough of these cases, I'd be sorely tempted to add a new method to our suites along the lines of:

```ruby
def assert_equal_or_nil(expected, *args)
  if expected.nil?
    assert_nil(*args)
  else
    assert_equal(expected, *args)
  end
end
```

... which is just reintroducing something that MT seems to believe is a smell. Do you have any advice about how we should proceed to best fit in with the MT philosophy? |
This resolves a stern Minitest “warning” about an upcoming behavior change in MiniTest 6 that will result in the test failing. minitest/minitest#666
The warning doesn't make sense when actually comparing two things that happen to be nil. For example:
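The example itself didn't survive here, but a hypothetical case fitting the description — two computed values that both happen to be nil — might look like this (the `Record` struct and `Harness` wrapper are made up for illustration):

```ruby
require "minitest/autorun"

# Tiny harness mixing in Minitest's assertions so we can call them directly.
class Harness
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

# Hypothetical record type: deleted_at is legitimately nil for live records.
Record = Struct.new(:name, :deleted_at)

original = Record.new("a", nil)
copy     = original.dup

h = Harness.new
# Both sides are computed, and both happen to be nil. The comparison is a
# meaningful copy-fidelity check, yet Minitest warns because the expected
# value turns out to be nil.
h.assert_equal original.deleted_at, copy.deleted_at
```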
Is this change in MT6 purely philosophical or based on a breaking change? |
Yes. This change is a mistake if you cannot write `assert_equal a, b`
where a might be nil. |
Can you provide any background for this change?
An example? I think many of us don't understand the intent here. |
It is up to the language and its methods to define the notion of "equality", not up to the testing framework. Therefore, `assert_equal` should simply defer to `==`, whatever the operands are. Also, when making assertions against variables, it is good practice to ensure they actually contain what you think they contain; again, this is of no concern to the assertion of equality. I suggest introducing a new assertion for this behaviour, with an appropriate name which reflects the expectation of equality to a non-nil value. |
I regularly test methods using a set of expected return values based on a set of input values. The return value can be nil for some inputs. Please revert this change or suggest another way of writing these tests. I would like to keep up to date with MiniTest since it is awesome. |
I will not be reverting this change.
Yes, this is a basic description of testing. We all do it. This is good.
Here's where the problem starts to reveal itself. You're using known input values and have known expected output values. So when you know and expect it to be nil, you use `assert_nil`.
If you do not know when it is supposed to be nil, that's a separate and very important issue. It means you are not in control of your tests and you might have nils when you don't realize it. That is what this change is trying to expose. Tautological assertions provide a false sense of security and cover up potential bugs. I found a few across my projects when I was trying out this change. @tenderlove found bugs in his projects too (he proposed this change). Real bugs. In production. This is a good thing. Use this opportunity to look at why you're resisting the change to see if maybe you're making assumptions that cover up bugs in your code or thinking. @randycoulman wrote a good blog post about this (and maybe he wants to chime in here?): http://randycoulman.com/blog/2016/12/20/tautological-tests/ |
Some of us find it frequently convenient to do things like:

```ruby
{ "input"                   => "expected_output",
  "input_with_nil_expected" => nil }.each do |input, expected|
  assert_equal expected, something.something(input)
end
```

That's the pattern/use this breaks. But I get it, you think other considerations override this and won't be changing your mind. I'm not gonna try to argue it anymore; I just wanted to clarify what people were talking about when they say things like "I regularly test methods using a set of expected return values based on a set of input values" -- I think this sort of thing is what several people in the thread are talking about, and how this change makes that particular way of doing things no good anymore. |
@zenspider Thank you for your response. I am indeed looking to improve my code and thinking. I am changing all my explicit nil expectations over to `assert_nil`.

The situations where I have a problem are typically in test helpers, not in individual tests. Since I did not include an example in my previous post, I went through my concrete examples, and I found that they are all in test helpers comparing multiple values. So, in all my cases I can switch from individual assertions to a single `assert_equal` comparing arrays of values.

I think it is slightly less readable, but better than having a check on whether the expected value is nil. All in all, it is good enough for me, although it feels like a coincidence that all my cases were comparing multiple values. |
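The workaround described here — comparing multiple values at once so the top-level expected value is never a bare nil — might be sketched like this (the `Thing` struct, `assert_thing` helper, and `Harness` wrapper are hypothetical):

```ruby
require "minitest/autorun"

# Tiny harness mixing in Minitest's assertions so we can call them directly.
class Harness
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

# Hypothetical domain object with two attributes, either of which may be nil.
Thing = Struct.new(:x, :y)

# Compare several values at once: the top-level expected value is an Array,
# never a bare nil, so no deprecation warning fires even when a member is nil.
def assert_thing(h, a, b, object)
  h.assert_equal [a, b], [object.x, object.y]
end

h = Harness.new
assert_thing(h, 1, nil, Thing.new(1, nil))
```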
@jrochkind There's also quite a lot of us working with inherited code bases where the factories are just lazy 😄 After replacing all the explicit nil tests, it looks like the other places we're seeing this deprecation are places where the source data is also supplying nil. |
I understand where you're trying to go with assert_nil, and I really agree overall, but please don't break equality testing with nil. Using assert_nil makes perfect sense when the programmer knows at write-time that the output of his test article is going to be nil. However, the programmer doesn't always know that! Your stance is going to force the programmer to be defensive about his tests, which will hurt readability, maintainability, and programmer satisfaction. Here's an example pattern that's extremely common in my code. This is clean, efficient, easy to read and update, and your proposal will break that.

```ruby
samples = { ... }
samples.each do |input, expected|
  actual = frob(input)
  assert_equal(expected, actual, "'#{input}' -> #{actual}, but should have been #{expected}")
end
```
|
OK. I almost buy your second argument -- that if I'm testing a range of inputs for their expected outputs and want to do so efficiently in a loop, not allowing a test for equality that happens to include nil is a bit of a pain. However, I reject the validity of the quoted statement above. If you don't know what your output will be for a given input, you're doing something very, very, very, very wrong. |
To me, it's the same argument. The loop that iterates over and asserts test cases is agnostic to what the test cases actually are. Let's say I have some test cases:

```ruby
samples = {
  "foo" => "bar",
  "Foo" => "Bar",
  "baz" => "qux",
}
# assertion loop not included for brevity
```

All is well and good here, but requirements changed or I forgot a corner case and I need to add a new test case:

```ruby
samples = {
  "foo" => "bar",
  "Foo" => "Bar",
  "baz" => "qux",
  ".{-+" => nil,
}
# assertion loop not included for brevity
```

BOOM! Tests have failed. Now I'm left with either completely refactoring this test (and making it VERY repetitive), or changing my assertion loop to match what @lazyatom mentioned above:

```ruby
samples.each do |input, expected|
  actual = frob(input)
  if expected.nil?
    assert_nil(actual, "'#{input}' -> #{actual}, but should have been nil")
  else
    assert_equal(expected, actual, "'#{input}' -> #{actual}, but should have been #{expected}")
  end
end
```

This new assertion block is just complicated enough to fail "immediate recognition", which is really important when reading/writing code. Also, it violates DRY. If there's a better alternative than what I'm doing here, I'm all ears. As I touched on before, I'm a big fan of semantic assertions. They're a huge win in nearly all cases - I just think that forcing them has undesirable side effects that outweigh the benefit of encouraging adoption. |
I guess that I would argue that there's very little lost by having two entirely separate test cases, one that takes a hash of 'input' vs 'expected', and one that takes an array of inputs where 'expected' is always nil. That is, those things that are known to generate nil results get tested separately. Point is: much of the time, nil is special enough to deserve its own test. |
@mshappe I like what you've suggested, and I think I'm going to adopt that for when I'm testing invalid -> nil - but invalid inputs aren't the only case when mixed (nil, not nil) outputs are equally valid. In reality, I don't think it's possible to declare "nil should always be tested separately" as a general rule. We could keep throwing examples and counter examples at each other, but I doubt either of us would be satisfied. I'll just reiterate: I think the current plan is going to hurt MT and MT's community, and I urge you to abandon or modify that plan. |
I agree that assert_equal(foo, nil) should still be supported where foo is nil. It makes for an inconsistent API if it's not supported. Nil should be treated like a valid state. |
For the most part, as should be apparent here, I agree with @zenspider 's decision on philosophical grounds. That said, |
Man this is bordering on ridiculous. nil is a value. You can return it like any other value. Who are you to determine what nil means to me and my program? I suppose the fact that if you return NOTHING it's also nil, but that's ruby for you. |
@canoeberry rein it in on the tone, please. To answer your question: I am the author of minitest and I determine what nil means to minitest. What nil means to you and your program is between you and nil. |
I can't disagree more... If you don't know the value at the time you're writing the test, you're not testing, you're hoping. |
@joshuasiler it's |
To everyone: I actually appreciate the spiritedness of this thread, but I have to say that the objections to this seem really really overblown. (eta: try harder?) I was able to find and fix every case across many (> 75) projects in less than 10 minutes... (you might not believe that claim, but it's true. Also helps that I have my own CI locally) I even found a couple legit bugs in the process! |
@hovissimo btw... I think writing tests like that is terrible:

```ruby
samples = { ... }
samples.each do |input, expected|
  actual = frob(input)
  assert_equal(expected, actual, "'#{input}' -> #{actual}, but should have been #{expected}")
end
```

That means you only ever get ONE failure at a time... No pattern matching, no signaling that something deeper has happened. Just a single breadcrumb. That sucks! Much better to use that loop to generate a bunch of test methods. Then you can see when multiple break at once and know that something bad happened deeper. |
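The suggestion to generate test methods from the loop might look like this sketch (`frob` and the sample data are made up for illustration):

```ruby
require "minitest/autorun"

# Hypothetical method under test.
def frob(s)
  s.upcase
end

# One generated test method per sample: each input/expected pair now fails
# independently, so several failing at once signals a deeper breakage.
class FrobTest < Minitest::Test
  SAMPLES = { "foo" => "FOO", "bar" => "BAR" }

  SAMPLES.each do |input, expected|
    define_method("test_frob_#{input}") do
      assert_equal expected, frob(input)
    end
  end
end
```

Nil-expecting samples would then get their own generated methods calling `assert_nil`, keeping each case independent.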
We've been monkey patching Minitest to support this feature at GitHub for years. The place where it saves us is when a value unexpectedly becomes nil.

Of course this is an example of a tautological test, and yes, a tautological test is bad. But without the exception on nil expectations, we wouldn't have caught those cases.

This has been so helpful that the small effort it takes to explicitly call out the "I expect the return value to be nil" cases makes it worth while. If there were a way to tell the difference between the return value of a method that happens to be nil and a literal nil written in the test, that would be even better.
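The patch itself isn't shown in the thread; a hypothetical reconstruction of that kind of monkey patch — failing hard when the expected value is nil, instead of passing tautologically — could look like this (`NoNilExpected` and the `Harness` wrapper are made up):

```ruby
require "minitest/autorun"

# Hypothetical reconstruction: make a nil expected value a hard failure
# instead of a silent tautology, forcing callers to use assert_nil.
module NoNilExpected
  def assert_equal(exp, act, msg = nil)
    flunk "Use assert_nil if you expect nil" if exp.nil?
    super
  end
end

Minitest::Assertions.prepend(NoNilExpected)

# Tiny harness mixing in the (now patched) assertions.
class Harness
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

h = Harness.new
h.assert_equal 1, 1 # still fine

begin
  h.assert_equal nil, nil # now fails loudly instead of passing
rescue Minitest::Assertion => e
  puts "rejected: #{e.message}"
end
```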
I could get behind this. OTOH it's so easy to add an assertion like `assert_equal_or_nil` to your own suite. |
Also I just wanted to say, I totally understand the "iterating over samples" case; I use it too. Just specifically calling out the "return value is nil" cases has been worth it to me for the safety this buys. Not to mention that, to me, returning nil is a Big Deal™ and I want those cases called out explicitly. |
@tenderlove yours is the only example given in support of this change. But I feel that you're solving this problem at the wrong level. Instead of fundamentally changing the semantics of `assert_equal` for everyone, this behaviour could live in your own test infrastructure.
Alternatively, you yourself have pointed out that it's easy to subclass and patch Minitest to get the behaviour you want. And @zenspider, you're being so obtuse that I feel that you're deliberately misconstruing our criticisms.
What @hovissimo said made perfect sense. It's perfectly valid to expect that some inputs produce nil. And what's so special about nil anyway? But, as you've said multiple times, it's your project, and you can do as you wish. The rest of us are all getting what we paid for. You won't hear from me again on this subject. |
If I could see the future, yes. But as I said, at the time of writing the test the author did not expect the return value to be nil. For example, a function that couldn't possibly return nil and someone later changed it to return nil. |
For what it's worth, I do understand the tautological tests argument, and basically everything above about not being in control of when you expect nil.
In the context of my example, when I write `assert_domain_concept a, b, c, object`, I do control the values at the call site, and some of them may legitimately be nil.
Perhaps my example sits in a kind of grey area between these two zones of control... I understand if the position you're taking is that test authors who want to build up a set of "higher-level assertions" like this are on their own, but I felt it was worth being clear that there is a case where you might be simultaneously in control of your expectations and still run into this warning. |
From @lazyatom:
... actually it does. You're already writing your own user defined assertion. You've already refactored your tests into custom assertions. Adding an `assert_equal_or_nil` at that level is fine. The tautological test argument is for the cases where someone is just using `assert_equal` directly with an expected value that might be nil. This isn't for everyone. And for some, it'll be some work. We think that work is worth it in the long run. If you're not ready or willing for this change, then don't use unbounded dependencies on minitest in your Gemfile. |
@lazyatom ... thank you for constructive dialog. Hyperbolic shit makes me want to lock down threads like this but people like you make it worth the noise. |
I'm going to add one more thought to this thread: both options are terrible. The Wrong gem is the right way to do this. Creating entirely new DSLs and method libraries for something that can so easily be handled from within the language itself is silly. |
This change did uncover a few incorrect tests for me. For those upgrading, three easy ways to upgrade your loop:
|
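The three ways themselves were elided above; the following is a hypothetical reconstruction of options commonly suggested in this thread — branching on nil, splitting the nil cases out, and wrapping both sides — using a made-up `frob` and a small `Harness` wrapper around Minitest's assertions:

```ruby
require "minitest/autorun"

# Tiny harness mixing in Minitest's assertions so we can call them directly.
class Harness
  include Minitest::Assertions
  attr_accessor :assertions

  def initialize
    @assertions = 0
  end
end

# Hypothetical method under test: returns nil for bad input.
def frob(s)
  s == "bad" ? nil : s.upcase
end

samples = { "foo" => "FOO", "bad" => nil }
h = Harness.new

# Option 1: branch on nil inside the loop.
samples.each do |input, expected|
  if expected.nil?
    h.assert_nil frob(input)
  else
    h.assert_equal expected, frob(input)
  end
end

# Option 2: keep the nil-producing inputs in their own list with assert_nil.
["bad"].each { |input| h.assert_nil frob(input) }

# Option 3: wrap both sides so the expected value is never a bare nil.
samples.each do |input, expected|
  h.assert_equal [expected], [frob(input)]
end
```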
I do understand that tests need to be explicit and that nil warrants its own assertion. That being said, for tests that evaluate whether e.g. a new object was copied properly from a fixture, one might like to test that fields end up with the same respective values, whichever those are. I get that this has a notion of "not knowing what you are expecting" (hoping) and may mask errors, yet it may also be a reasonable risk if you know that you're not changing the fixture itself. Explicitness also becomes kind of a pain if you'd like to factor out some common assertions into a function which will work OK 9/10 times but throw a deprecation warning the 1 out of 10 times where a nil is passed in as the expected value. |
I don't think anyone is going to add anything new to this thread, as evidenced above... Locking and closing... |