Unit spec timeout #15

merged 2 commits into from Apr 12, 2013

4 participants


This branch adds a 100ms timeout on all unit tests. This is mainly to open up discussion on whether or not limiting the unit test runtime will result in better tests.

@mbj, @snusnu, @solnic I would love to get input from you on this especially.

dkubb added some commits Apr 6, 2013
@dkubb dkubb Add a timeout that causes a spec failure on long running unit tests
* A proper unit test should always execute in less time than 1/10th
  of a second, and more likely 1/100th of a second. Having an upper
  bound with a (ridiculously) large limit, one that is still commonly
  exceeded, will ensure that our unit tests remain proper unit
  tests and do not devolve into integration tests.

  Over time we can (and should!) work to reduce this threshold
  even further.
@dkubb dkubb Update flay and reek thresholds 4e46d1c
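The per-example timeout described in the first commit can be sketched with Ruby's stdlib Timeout. The helper name and the RSpec wiring below are illustrative assumptions, not the branch's actual implementation:

```ruby
require 'timeout'

UNIT_SPEC_LIMIT = 0.1 # 100ms, the threshold proposed in this PR

# Wrap a block and raise Timeout::Error if it runs longer than the limit.
# (with_unit_spec_timeout is a hypothetical helper name.)
def with_unit_spec_timeout(limit = UNIT_SPEC_LIMIT, &block)
  Timeout.timeout(limit, &block)
end

# In an RSpec suite this could be wired up as an around hook:
#
#   RSpec.configure do |config|
#     config.around(:each) do |example|
#       with_unit_spec_timeout { example.run }
#     end
#   end
```

Timeout raises Timeout::Error when the wall-clock limit is exceeded, which is what lets a runaway spec be aborted mid-run rather than merely reported on afterwards.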

I don't think this should be guarded by another constraint. Also, 100ms is a crazy constraint. Virtus specs run in ~200ms and that's quite fast IMHO. Frankly, everything below 1 second is really fast enough. I don't think we should be bothered by such a metric. There are also cases where a spec suite runs slower because some arbitrary process in the system took some resources and slowed things down. I don't want to see failures because of that.

I think this is really unnecessary. I don't want to waste my time tweaking my spec suite so that it runs 100ms faster, or even 500ms faster. It's an amount of time I don't notice; I don't care about this at all.

When I see my unit spec suite running for longer than, let's say, 1-2 seconds, it's alarming: either I'm doing something wrong or my library does too much. That doesn't change the fact that I don't want my specs to fail because of it. It's just very strict and "unfriendly".

mbj commented Apr 6, 2013

@solnic I think you misread dkubb: this change is a timeout per spec / example, not for the full suite.

@dkubb I think it is an interesting change. But maybe we should measure the time the CPU spends on the task instead of realtime? That would result in far fewer false positives.


@dkubb I'm fine with that limit per unit test. It makes sense to limit the time a single unit test should take. I agree that such a limit would most probably only be violated if the test in question is in fact an integration test, and we should catch those (and rewrite them or move them elsewhere). I also agree with @mbj that we should probably measure CPU time rather than realtime.


@solnic this timeout is per-example. It would only catch an example that was obviously doing too much work to be called a unit test.

@mbj do you have any ideas on how we could do a timeout based on CPU time? If we use something like Benchmark we can get the actual time, but only after the spec finishes. With Timeout we can kill the spec before it runs too long.
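One partial answer to the CPU-time question: Ruby cannot easily interrupt a block based on CPU time, but `Process.times` can measure the CPU time a block consumed and fail the example after the fact. A sketch under that assumption (`measure_cpu_time` is a hypothetical helper, not anything from this branch):

```ruby
# Run a block and return [result, cpu_seconds], where cpu_seconds is the
# user + system CPU time the process consumed while the block ran.
def measure_cpu_time
  before = Process.times
  result = yield
  after  = Process.times
  cpu = (after.utime - before.utime) + (after.stime - before.stime)
  [result, cpu]
end

# A post-hoc check in a spec hook might then look like:
#
#   result, cpu = measure_cpu_time { example.run }
#   raise "unit spec used #{cpu}s of CPU time" if cpu > 0.1
```

The trade-off is exactly the one raised in the comment above: this only detects the overrun after the example finishes, whereas Timeout can abort it mid-run but counts wall-clock time and so can fire spuriously on a loaded machine.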


There don't appear to be any strong objections to this, so I'm going to merge it in. If it creates any problems we can always revert it or tweak the implementation.

@dkubb dkubb merged commit c078c5b into master Apr 12, 2013

1 check passed

Details: The Travis build passed
@dkubb dkubb deleted the unit-spec-timeout branch Apr 12, 2013