Is it wise to deliberately ignore certain refutations? #159
Comments
I don't understand this comment. Could you simplify your argument? It's not useful to `assert_nothing_raised`. If all you care about is that the method doesn't raise an exception, then the method below should suffice for all your applications:

```ruby
def useless_method(*args, &block)
  # look mom, no exceptions!
end
```
Yes, it is wise. http://blog.zenspider.com/blog/2012/01/assert_nothing_tested.html
I need assert_nothing_raised, for example, when I have a method that raises an ArgumentError with a descriptive message for bad inputs, and want to ensure that there are no false positives (i.e. it doesn't raise an error on valid input).
Yes, it is wise. No, you don't need `assert_nothing_raised`. Nobody needs `assert_nothing_raised`.
In the case I mentioned where I'm testing for both false positives and negatives, an empty implementation would fail. If you were testing a validation method that returns true or false, you'd want to test both, wouldn't you? Regarding the fact that an error is reported, are you saying that the distinction between errors and failures is meaningless? If so, why are they reported separately?
I'd absolutely want to test both. And I'd test that it returned true and returned false. I wouldn't test that it returned true and then didn't raise an exception. See the difference? No, I never ever implied that the distinction between failures and errors is meaningless. Quite the opposite. I'm not sure where you got that. If you can point to it, I'll try to clarify.
In my situation, it's useful to consider the method being tested a validation method that does and returns nothing on success but raises an exception on invalid input. Regarding the difference between testing methods that report correct inputs by returning true vs. reporting it by not raising an exception, from the point of view of wanting to test correct program behavior, I would want to test both. I do agree with you that there are naive uses of testing for no exception raised, but I don't think this is one of them. My objective is to see that an error is thrown when conditions warrant it but not when they don't. I'm not understanding why you see the former as worthy of testing but the latter not...

...unless your statement that "Minitest (and all test frameworks) reports uncaught exceptions as errors, so there is an implied assert_nothing_raised around every test method" is intended to imply that I should put the bare method call in a test method, in which case if it raised an exception it would be reported as an error rather than as a test failure (right?). Did I misunderstand you?

In any case, assert_nothing_raised reported failures, not errors, didn't it? I would think that reporting the error I describe as a test failure would be more accurate than reporting it as an error, no?
There's no such thing as a method that returns nothing. And I assume in your "returns nothing" scenario, that the method has side-effects. So test them. There is zero valid reason to use `assert_nothing_raised`.

In your described scenario, I'd have N positive tests, where N is the product of the number of edge cases per argument (I'll assume none) and relevant ivar state (which I'll fake via "setup" methods), and M negative tests, where M is the number of negative edge cases I can have:

```ruby
def test_mymethod_positive_edgecase1
  obj.setup_edgecase1
  obj.mymethod # or assert_nil obj.mymethod, or assert_equal 42, obj.mymethod
  assert_equal 42, obj.side_effect_accessor
end

def test_mymethod_negative_edgecase1
  obj.setup_bad_state # or this could be via omission of setup, in which case I'd comment it out and state the intent
  assert_raises SomeException do
    obj.mymethod
  end
end
```

Clean, straightforward testing. It says what it should do. It is clean. It is understandable. It is refactorable.
This is an error in the way you think about testing. In both my cases above, I'm testing for what should happen, not testing that it doesn't do what it shouldn't. Testing that something doesn't do what it shouldn't doesn't actually verify that it does what it should do correctly.
Thanks for taking the time to have this conversation. I appreciate the time and effort you invest in open source software. To clarify my intention, let's use this method to illustrate:

```ruby
# Assert that the specified number is a legal value with which to
# instantiate a NXT type bitmap. Raise on error, do nothing on success.
def assert_legal_bitmap_value(number)
  max_value = NxtTypes::MAX_BITMAP_NUMBER_VALUE
  if number > max_value
    raise ArgumentError.new("Bitmap maximum value is #{max_value} (0x#{max_value.to_s(16)}).")
  end
  if number & 1 == 1
    raise ArgumentError.new("Bitmap number must not have low bit set.")
  end
end
```

Although this method's name begins with 'assert', it is in the production code, not the test code. It's used in several places and is extracted into a separate method for DRYness. Its sole purpose is to raise on bad input; if input is good there are no side effects. Yes, it does return a value, nil, but that is the only value it can possibly return, so the return value adds no information to the fact that it returned at all. That's what I meant by returns nothing: figuratively (though not literally), it's not returning any information.

If I were to use your test_mymethod_positive_edgecase1, then do I understand correctly that you're saying I should test for the correct return value (nil in this case)? If so, then if the method behaves incorrectly and raises an exception, that will be a test error rather than a test failure, won't it? Since the specific thing I'm testing for is that an error is not raised, that seems wrong to me. Maybe I'm putting too much value on the distinction between those two?

By the way, after more thinking about this issue, I realized that what I wanted was an assert_not_raised(exception_class) that would fail if exception_class was raised, but error if any other exception type was raised.
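No such helper exists in Minitest; a minimal sketch of the described assert_not_raised, taking its name and behavior from the description above (everything else, including the module name, is assumed), might look like:

```ruby
# Hypothetical helper, not part of Minitest. Mixed into a test class, it
# fails the test if the named exception class is raised, does nothing when
# the block returns normally, and lets any other exception propagate so it
# is reported as an error rather than a failure.
module NotRaisedAssertion
  def assert_not_raised(exception_class, msg = nil)
    yield
  rescue exception_class => e
    flunk(msg || "Expected no #{exception_class}, but one was raised: #{e.message}")
  end
end
```

In a Minitest::Test subclass this would read, e.g., `assert_not_raised(ArgumentError) { assert_legal_bitmap_value(4) }`, where `flunk` is Minitest's fail-unconditionally assertion.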
Yes, I think that you're too hung up on failure vs error. I've said my piece as clearly as I can. I'm clearly not getting my point across to you, so I'm just going to leave it be at this point.
See discussion in minitest/minitest#159.
@keithrbennett Yes, you should test for `nil`. This solution is more readily obvious if you think of your "tests" as really being specifications; examples of how your code should behave. I.e., BDD.
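Under that view, the positive and negative specifications for the validator upthread might look like the sketch below (the `NxtTypes` constant is stubbed with an assumed value purely to make the example self-contained):

```ruby
require "minitest/autorun"

# Stubbed constant: the real value lives in the commenter's codebase.
module NxtTypes
  MAX_BITMAP_NUMBER_VALUE = 0xFFFE
end

# The validator from the earlier comment, reproduced for the example.
def assert_legal_bitmap_value(number)
  max_value = NxtTypes::MAX_BITMAP_NUMBER_VALUE
  if number > max_value
    raise ArgumentError.new("Bitmap maximum value is #{max_value} (0x#{max_value.to_s(16)}).")
  end
  if number & 1 == 1
    raise ArgumentError.new("Bitmap number must not have low bit set.")
  end
end

class TestLegalBitmapValue < Minitest::Test
  def test_accepts_even_number_in_range
    # nil is the only value the method can return, so asserting it
    # specifies "the call returned normally" without assert_nothing_raised.
    assert_nil assert_legal_bitmap_value(4)
  end

  def test_rejects_number_above_maximum
    assert_raises(ArgumentError) { assert_legal_bitmap_value(0x10000) }
  end

  def test_rejects_number_with_low_bit_set
    assert_raises(ArgumentError) { assert_legal_bitmap_value(3) }
  end
end
```

If the positive test raises, Minitest reports it as an error rather than a failure, which is exactly the distinction being debated in this thread.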
I have implemented refute_raises() and, before committing, pushing, and opening a merge request, felt I should look through the code and any existing issues which might reference this or something similar, since the omission seems sufficiently obvious that it was not mere oversight.
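For reference, one possible shape for such a refute_raises (a sketch under assumed semantics; the actual implementation in the merge request may well differ) would convert any raised exception into a test failure:

```ruby
# Hypothetical refute_raises, not Minitest API: runs the block and reports
# any StandardError as a test failure (via flunk), mirroring assert_raises.
module RefuteRaisesSketch
  def refute_raises(msg = nil)
    yield
  rescue StandardError => e
    flunk(msg || "Expected nothing to be raised, but got #{e.class}: #{e.message}")
  end
end
```

Catching only StandardError (rather than Exception) leaves Minitest's own Minitest::Assertion failures, and signals like Interrupt, untouched.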
Hence I have noted issue refute_block method #112, but felt, as did the issuer, that the response was insufficient. Perhaps it is the vagaries of open source to have such annoyances, but there it is.
Also I have noted the following from test_minitest_unit.rb:
The above is really simply a repetition of your statements in refute_block method #112, or the other way round, such that even if the issuer of #112 had read it in the code (which apparently he had not), he would, regardless of the exchange in that issue, be none the wiser.
Rather than a bunch of hand waving about how certain methods are "useless", how about we be more specific and give examples showing either that the desired tests can be composed more simply (or at least composed from the existing toolset), or that the desired test itself is useless, and why.
I don't find the following useless:
It is perfectly reasonable to assert that something will happen or that it won't happen.
When it comes to testing the interface of something, it is desired to be able to test that certain objects, messages, sequences are acceptable and certain objects, messages, sequences are not acceptable. If this is of the second kind of "useless", then I have to disagree, since I do want to have test coverage which anticipates the receipt of some possible inputs which are considered unacceptable.
It is admitted that it is not possible to assert everything that will not happen, since that would be a lot of things. If such an attempt were made, however, this may be a reasonable start: test_that_a_giant_red_spider_does_not_ask_the_code_gremlin_to_come_and_eat_my_face(). I'd need to implement the following first though: