Single expectation test #5

andreareginato opened this Issue Oct 3, 2012 · 27 comments



Write your thoughts about the "single expectation test" best practice.

sj26 commented Oct 3, 2012

This can, however, be the bane of performant testing.


I'm fairly new and inexperienced with RSpec, and especially where there is significant setup, single-expectation tests can make a suite very slow. I have sometimes written separate tests during TDD, got everything working, then written a combined test to leave running in CI/automatic testing and commented out the separate tests. If the combined test fails, I can quickly reactivate the separate tests to find the exact issue.


I sometimes use an assertion to verify that the conditions are right for the test to be valid. For instance, on one project we had a very complicated set of fixtures, in several related models. There were about a dozen devs, so we were often stepping on each other's fixtures. When I tested a scope, I would usually do "it finds only those that fit" and "it omits only those that don't fit", by looping through the found or omitted records and verifying that they fit or not. However, this tactic depends on there being ANY records that do or don't fit. So I preceded the loop with a check that the set contained at least one record.

One of our pull-request-approvers regularly made the perfect the enemy of the good, strictly enforcing "one assert per test", and rejected that style, so I made the existence check a separate test, which was much slower.

(To those who say putting the assertion in a loop was already a violation: yes, I could have set a flag upon error and checked it at the end of the loop, but that throws away a lot of clues about which record violated it. Yes, I could have initialized something to nil and set it to the offending record's id on error, but still...)
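
The "which record violated it" idea can be sketched in plain Ruby: instead of a bare flag, collect the ids of the offending records so one final assertion still names them. The `fits?` predicate and the data here are hypothetical stand-ins for the project's scope condition:

```ruby
# Hypothetical scope condition: a record "fits" if age >= 18.
def fits?(record)
  record[:age] >= 18
end

records = [
  { id: 1, age: 30 },
  { id: 2, age: 15 },
  { id: 3, age: 42 },
]

# Collect every offender instead of stopping at (or merely flagging)
# the first, so a single final check can report exactly which records
# violated the scope.
offenders = records.reject { |r| fits?(r) }.map { |r| r[:id] }

# In a spec, this single check replaces the assertion-in-a-loop:
#   expect(offenders).to be_empty
puts offenders.inspect  # => [2]
```

A failure message built from `offenders` keeps the diagnostic value of the per-record assertions while satisfying a "one assert per test" reviewer.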


I think this best practice, as written, is misleading and not helpful. Here's what I'd say instead:

  • In isolated unit specs, you want each example to specify one (and only one) behavior. Multiple expectations in the same example are a signal that you may be specifying multiple behaviors.
  • In tests that are not isolated (e.g. ones that integrate with a DB, an external webservice, or end-to-end tests), you take a massive performance hit doing the same setup over and over again, just to set a different expectation in each test. In these sorts of slower tests, I think it's fine to specify more than one isolated behavior.

As in all things, software engineering is about tradeoffs, and this is no exception. It's bad to encourage cargo-cult thinking here.


One of the primary reasons for the "single assertion per test" is to provide easy and understandable traceability for what caused the failure, and keep multiple errors from mingling together and confusing this feedback.
While I do agree that single assertion tests are generally a good thing, I warn people to stay away from it as a guideline and instead focus on the goal: making test failures point directly to the cause of the error.


๐Ÿ‘ to @myronmarston


Later you make the point that you should have custom matchers. That is actually a very pragmatic suggestion that accomplishes a similar goal to this suggestion.
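
For reference, an RSpec custom matcher is ultimately just an object that responds to matches? and failure_message, so the custom-matcher idea can be sketched without the RSpec::Matchers.define DSL. The be_blueish matcher and its RGB thresholds below are hypothetical (loosely based on the color example later in this thread):

```ruby
# A matcher is any object responding to matches?(actual) and
# failure_message; expect(...).to calls exactly these two methods.
# BeBlueish and its thresholds are hypothetical.
class BeBlueish
  def matches?(color)
    @color = color
    color[:r] < 0.2 && color[:g] < 0.2 && color[:b] > 0.8
  end

  def failure_message
    "expected #{@color.inspect} to be a shade of blue"
  end
end

# Conventional helper so a spec can read: expect(color).to be_blueish
def be_blueish
  BeBlueish.new
end

# The matcher can be exercised directly, without RSpec loaded:
puts be_blueish.matches?(r: 0.1, g: 0.1, b: 0.9)  # true
puts be_blueish.matches?(r: 0.9, g: 0.1, b: 0.1)  # false
```

A matcher like this collapses several related expectations into one, and its failure_message names the behavior ("a shade of blue") rather than the raw component check that happened to fail.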

pelle commented Dec 13, 2012

Updated using @myronmarston's description.
Any corrections or comments on the updated guideline are appreciated.




+1 to @myronmarston (I was going to say exactly that :D)


Please consider updating this spec, as the assign_to matcher is no longer part of the latest version of shoulda. More details here



Why so? The text states that it should say 'GOOD', so why do you think it isn't? You could be right of course, but why?


Guys, if you think of any change, please send us a pull request with the update.


Where, and in which versions, is the is_expected_to message defined? I can't find it in the RSpec API docs. Is it meant to be locally defined by the user?


@andreareginato according to Myron's reply, the message is is_expected. How did you get is_expected_to (with 2 underscores) to work for you? (Is it a typo?)


Yep. It was a typo. Sorry for that. Now it's up and running with the needed corrections.

subject { color }

it "is blue" do
  expect(subject.RGB.R).to be < 0.2
  expect(subject.RGB.G).to be < 0.2
  expect(subject.RGB.B).to be > 0.8
end

In this example, I just want to test that some action gives me the color blue. But the only way I have to test that it's blue is by its RGB components. If I don't break the single expectation test pattern, I lose the "why" of this test. It is important to me that the reason I have this test is to make sure the color is some kind of blue.

What is wrong with this reasoning?


What is wrong with this reasoning?

Absolutely nothing. Your example is specifying one behavior, and that's what's important. The fact that it takes 3 expectations to do isn't particularly important.

That said, in RSpec 3.1+, you could use a single expectation:

expect(subject.RGB).to have_attributes(
  R: a_value < 0.2,
  G: a_value < 0.2,
  B: a_value > 0.8
)
dayyan commented Apr 21, 2015

FYI the example method respond_with_content_type has been deprecated in RSpec 2.0.



While I do agree that single assertion tests are generally a good thing, I warn people to stay away from it as a guideline and instead focus on the goal: making test failures point directly to the cause of the error.

@coreyhaines would you agree that aggregate_failures (introduced in 3.3!) addresses this concern?

The "not isolated" example could be updated to include it:

it 'creates a resource', :aggregate_failures do
  expect(response).to respond_with_content_type(:json)
  expect(response).to assign_to(:resource)
end
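
For readers without RSpec 3.3 at hand, what aggregate_failures does can be sketched conceptually in plain Ruby: run every check, remember each failure, and raise once at the end instead of stopping at the first failed expectation. This is a sketch of the idea, not RSpec's actual implementation:

```ruby
# Conceptual sketch of aggregate_failures: evaluate every check,
# collect each failure, and report them all in a single error at the end.
def aggregate_failures(label)
  failures = []
  # The block receives a checker that records failures rather than raising.
  yield ->(description, passed) { failures << description unless passed }
  return if failures.empty?
  raise "#{label}: got #{failures.size} failure(s): #{failures.join(', ')}"
end

aggregate_failures("creates a resource") do |check|
  check.call("responds with JSON content type", true)
  check.call("assigns :resource", true)
end
puts "all expectations held"
```

The payoff is exactly the traceability goal discussed above: a single failing example still tells you every expectation that broke, not just the first one.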

Relevant docs:
