
Is it wise to deliberately ignore certain refutations? #159

Closed

thoran opened this issue Sep 1, 2012 · 11 comments

Comments
@thoran

thoran commented Sep 1, 2012

  1. I have implemented refute_raises() and, before committing, pushing, and opening a merge request, felt I should look through the code and any existing issues that might reference this or something similar, since the omission seems sufficiently deliberate that it was not mere oversight.

  2. Hence I have noted issue refute_block method #112, but felt, as did its author, that the response was insufficient. Perhaps such annoyances are among the vagaries of open source, but there it is.

  3. I have also noted the following in test_minitest_unit.rb:

    # These don't have corresponding refutes _on purpose_. They're
    # useless and will never be added, so don't bother.
  4. The above is really just a repetition of the statements in refute_block method #112 (or the other way round), such that even if the author of refute_block method #112 had read it in the code, which apparently he had not, he would have been none the wiser regardless of the exchange there.

  5. Rather than a bunch of hand-waving about how certain methods are "useless", how about being more specific: give examples showing either that the desired tests can be composed more simply (or at least composed from the existing toolset), or that the desired test itself is useless, and why.

  6. I don't find the following useless:

describe 'behaviour_one' do

  before do
    @a = [3,1,2,nil,1,2,3,4,5,6,nil]
  end

  it 'must accept an array' do
    lambda{behaviour_one(@a)}.wont_raise Exception
  end

end
  7. It is perfectly reasonable to assert that something will happen, or that it won't.

  8. When testing the interface of something, one wants to be able to verify that certain objects, messages, and sequences are acceptable and that certain others are not. If this is the second kind of "useless", then I have to disagree, since I do want test coverage that anticipates the receipt of inputs which are considered unacceptable.

  9. It is admitted that it is not possible to assert everything that will not happen, since that would be a lot of things. Were such a thing to be attempted, this might be a reasonable start: test_that_a_giant_red_spider_does_not_ask_the_code_gremlin_to_come_and_eat_my_face(). I'd need to implement the following first, though:

class GiantRedSpiderAsksCodeGremlinToEatMyFaceError < RuntimeError; end
  10. There are negative assertions and positive assertions in English, so why should Ruby not have them too?

@tenderlove

I don't understand this comment. Could you simplify your argument?

It's not useful to assert_nothing_raised because you always expect your code to not raise an exception. If it does raise an exception, you'll get an E for that test case, and you need to fix it. Instead, you should be asserting that the method performs the duty which you expect.

If all you care about is that the method doesn't raise an exception, then the below method should suffice for all your applications:

def useless_method(*args, &block)
  # look mom, no exceptions!
end

@keithrbennett

I need assert_nothing_raised, for example, when I have a method that raises an ArgumentError with a descriptive message for bad inputs, and want to ensure that there are no false positives (i.e. it doesn't raise an error on valid input).
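The scenario can be sketched as follows (the validator name and rule here are invented for illustration, not taken from the thread):

```ruby
# Hypothetical validator: raises ArgumentError with a descriptive
# message on bad input, does nothing on valid input.
def validate_age!(age)
  raise ArgumentError, "age must be non-negative, got #{age}" if age < 0
end

# Negative test: bad input must raise.
begin
  validate_age!(-1)
  raise "expected ArgumentError"
rescue ArgumentError
  # expected
end

# The disputed "no false positive" test: valid input must not raise.
validate_age!(42)
puts "no false positive"
```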

@zenspider
Collaborator

Yes, it is wise. No, you don't need assert_nothing_raised. Minitest (and all test frameworks) reports uncaught exceptions as errors, so there is an implied assert_nothing_raised around every test method. By using assert_nothing_raised as your only assertion, you're saying "I don't care what this does, even if that is nothing at all, as long as it doesn't raise". I can make that pass simply by deleting your entire implementation.

Nobody needs assert_nothing_raised, it is the water-wings of coding: a dangerous false sense of security. By not having it in minitest, I'm encouraging you to actually test what your code should be doing, not what it isn't.
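The deletion argument can be demonstrated concretely (method names are invented for illustration): a test whose only check is "nothing raised" passes just as well against a gutted implementation.

```ruby
# A real implementation and a "deleted" one.
def sort_real(arr)
  arr.sort
end

def sort_gutted(arr)
  # implementation deleted entirely
end

# A stand-in for an assert_nothing_raised-style check.
def nothing_raised?
  yield
  true
rescue StandardError
  false
end

puts nothing_raised? { sort_real([3, 1, 2]) }    # true
puts nothing_raised? { sort_gutted([3, 1, 2]) }  # true: the gutted version passes too
```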

@keithrbennett

In the case I mentioned where I'm testing for both false positives and negatives, an empty implementation would fail. If you were testing a validation method that returns true or false, you'd want to test both, wouldn't you?

Regarding the fact that an error is reported, are you saying that the distinction between errors and failures is meaningless? If so, why are they reported separately?

@zenspider
Collaborator

I'd absolutely want to test both. And I'd test that it returned true and returned false. I wouldn't test that it returned true and then didn't raise an exception. See the difference?

No, I never ever implied that the distinction between failures and errors is meaningless. Quite the opposite. I'm not sure where you got that. If you can point to it, I'll try to clarify.
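Applied to a boolean validator (the validator itself is invented for illustration), this style tests each branch by its return value rather than by the absence of an exception:

```ruby
# Hypothetical boolean validator, not from the thread.
def valid_age?(age)
  age >= 0
end

# Test both branches by value (in Minitest: assert_equal true, ... / assert_equal false, ...):
raise "expected true"  unless valid_age?(21) == true
raise "expected false" unless valid_age?(-1) == false
puts "both branches tested by value"
```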

@keithrbennett

In my situation, it's useful to consider the method being tested a validation method that does and returns nothing on success but raises an exception on invalid input.

Regarding the difference between testing methods that report correct inputs by returning true vs. reporting it by not raising an exception, from the point of view of wanting to test correct program behavior, I would want to test both. I do agree with you that there are naive uses of testing for no exception raised, but I don't think this is one of them. My objective is to see that an error is thrown when conditions warrant it but not when they don't. I'm not understanding why you see the former as worthy of testing but the latter not...

...unless your statement that "Minitest (and all test frameworks) reports uncaught exceptions as errors, so there is an implied assert_nothing_raised around every test method" is intended to imply that I should put the bare method call in a test method, in which case, if it raised an exception, it would be reported as an error rather than a test failure (right?). Did I misunderstand you? In any case, assert_nothing_raised reported failures, not errors, didn't it? I would think that reporting the case I describe as a test failure would be more accurate than reporting it as an error, no?

@zenspider
Collaborator

it's useful to consider the method being tested a validation method that does and returns nothing on success but raises an exception on invalid input.

There's no such thing as a method that returns nothing. And I assume in your "returns nothing" scenario, that the method has side-effects. So test them. There is zero valid reason to use assert_nothing_raised. I'll try to illustrate that by showing how I'd test your situation.

In your described scenario, I'd have N positive tests, where N is the product of the number of edge cases per argument (I'll assume none) and relevant ivar state (which I'll fake via "setup" methods), and M negative tests, where M is the number of negative edge cases I can have:

def test_mymethod_positive_edgecase1
  obj.setup_edgecase1

  obj.mymethod # or assert_nil obj.mymethod, or assert_equal 42, obj.mymethod

  assert_equal 42, obj.side_effect_accessor
end

def test_mymethod_negative_edgecase1
  obj.setup_bad_state # or this could be via omission of setup, in which case I'd comment it out and state the intent

  assert_raises SomeException do
    obj.mymethod
  end
end

Clean, straightforward testing. It says what the code should do. It is understandable. It is refactorable.

My objective is to see that an error is thrown when conditions warrant it but not when they don't.

This is an error in the way you think about testing. In both my cases above, I'm testing for what should happen, not testing that it doesn't do what it shouldn't. Testing that something doesn't do what it shouldn't doesn't actually verify that it does what it should do correctly.
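Filled in with a hypothetical class (the Counter, its setup, and its side-effect accessor are all invented; they stand in for the obj, setup_* helpers, and SomeException in the skeleton above), the pattern runs like this:

```ruby
# Invented example class with one side effect and one failure mode.
class Counter
  attr_reader :value                  # the side-effect accessor

  def initialize(start)
    @start = start                    # stands in for setup_edgecase1 / setup_bad_state
  end

  def bump!
    raise ArgumentError, "no start configured" if @start.nil?
    @value = @start + 1               # the observable side effect
  end
end

# Positive case: assert the side effect, not "nothing raised".
obj = Counter.new(41)
obj.bump!
raise "expected 42" unless obj.value == 42

# Negative case: assert the specific exception (assert_raises in Minitest).
begin
  Counter.new(nil).bump!
  raise "expected ArgumentError"
rescue ArgumentError
  # expected
end

puts "both cases verified"
```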

@keithrbennett

Thanks for taking the time to have this conversation. I appreciate the time and effort you invest in open source software.

To clarify my intention, let's use this method to illustrate:

# Assert that the specified number is a legal value with which to
# instantiate an NXT-type bitmap.  Raise on error, do nothing on success.
def assert_legal_bitmap_value(number)
  max_value = NxtTypes::MAX_BITMAP_NUMBER_VALUE
  if number > max_value
    raise ArgumentError.new("Bitmap maximum value is #{max_value} (0x#{max_value.to_s(16)}).")
  end
  if number & 1 == 1
    raise ArgumentError.new("Bitmap number must not have low bit set.")
  end
end

Although this method's name begins with 'assert', it is in the production code, not the test code. It's used in several places and is extracted into a separate method for DRYness. Its sole purpose is to raise on bad input; if input is good there are no side effects.

Yes, it does return a value, nil, but that is the only value it can possibly return, so the return value adds no information to the fact that it returned at all. That's what I meant by returns nothing, that figuratively (though not literally) it's not returning any information.

If I were to use your test_mymethod_positive_edgecase1, then do I understand correctly that you're saying I should test for the correct return value (nil in this case)? If so, then if the method behaves incorrectly and raises an exception, then that will be a test error rather than a test failure, won't it? Since the specific thing I'm testing for is that an error is not raised, that seems wrong to me. Maybe I'm putting too much value on the distinction between those two?

By the way, after more thinking about this issue, I realized that what I wanted was an assert_not_raised(exception_class) that would fail if exception_class was raised, but error if any other exception type was raised.
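Such a helper (an assumption: no assertion of this name exists in Minitest; this is only a sketch of the idea) could look like:

```ruby
# Sketch: fail when the named exception class is raised, but let any
# other exception propagate, which a test framework would then report
# as an error (E) rather than a failure (F).
def assert_not_raised(exception_class)
  yield
  true
rescue exception_class => e
  # In a real Minitest assertion this would be `flunk`, producing a failure.
  raise "FAILED: #{exception_class} was raised: #{e.message}"
end

puts assert_not_raised(ArgumentError) { 2 + 2 }  # true: nothing was raised
```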

@zenspider
Collaborator

Yes, I think that you're too hung up on failure vs error. I've said my piece as clearly as I can. I'm clearly not getting my point across to you, so I'm just going to leave it be at this point.

tgxworld added a commit to tgxworld/rails that referenced this issue Mar 23, 2015
@dogweather

@keithrbennett Yes, you should test for nil when given valid input. Because that's the specified "positive" behavior. And when you test for nil, you'll get "assert nothing raised" for free.

This solution is more readily obvious if you think of your "tests" as really being specifications: examples of how your code should behave. I.e., BDD.
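Concretely (the validator here is a made-up stand-in for the method under discussion): asserting the specified nil return also covers the no-raise expectation.

```ruby
# Hypothetical validator in the style discussed above.
def assert_legal!(n)
  raise ArgumentError, "value must be non-negative" if n < 0
  nil
end

result = assert_legal!(5)
# If an exception had been raised we would never reach this line,
# so asserting the return value gets "nothing raised" for free.
raise "expected nil" unless result.nil?
puts "specified behaviour verified"
```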

@minitest minitest locked and limited conversation to collaborators May 17, 2017