Should timing badges take into account beforeEach/afterEach? #419

Open
joliss opened this Issue · 17 comments
@joliss

My beforeEach functions are pretty slow, and sometimes even dominate test time. (I set up test fixture data and initialize an Ember app.) But the little timing badges that show up in the browser (above 50ms or so) do not include this time; they only seem to measure the time of the test functions themselves.

I'm thinking that it would be sensible for the badges to include the beforeEach/afterEach time.

(Perhaps there are also other parts where these timings are used, like timeouts -- I'm fairly new to mocha.)

What do you think?
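
A minimal sketch of the situation (hypothetical timings; `assert` is Node's built-in module):

var assert = require('assert');

describe('profile page', function () {
  beforeEach(function (done) {
    // Stand-in for expensive fixture/app setup; here it just waits ~200ms.
    setTimeout(done, 200);
  });

  it('renders the user name', function () {
    // The test body itself is nearly instantaneous, so no slow badge shows up
    // even though each test really costs ~200ms of wall-clock time.
    assert.equal(1 + 1, 2);
  });
});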

@tj
Owner
tj commented

hmm tough call, maybe as a flag or something, I wouldn't want to see them all the time personally but yeah I see what you're saying

@lightsofapollo

I was just about to file a bug about this.

[snip]

✔ 268 tests complete (1116ms)

make test 0.96s user 0.11s system 8% cpu 13.048 total


Personally I think the beforeEach/etc. timing details are more important than the assertions.
Typically I put the most time intensive stuff in my beforeEach (setting up the fixture, dom, whatever)
and then run my assertions against that which takes virtually no time by comparison.

@domenic

+1

@emilecantin

You could just set up a helper function that does all your setup, and then call it inside your test. This way, you explicitly test your setup, and it would be included in the time measurements.
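
A rough sketch of that approach (the helper name and timing are made up):

var assert = require('assert');

// Hypothetical helper doing the expensive setup that would otherwise
// live in beforeEach.
function createFixture(callback) {
  setTimeout(function () {
    callback({ user: 'alice' });
  }, 200);
}

it('renders the user name', function (done) {
  createFixture(function (fixture) {
    // Setup now runs inside the test, so its ~200ms is part of the
    // duration the reporter shows for this test.
    assert.equal(fixture.user, 'alice');
    done();
  });
});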

@andihit

+1 from me

@emilecantin: well, that makes the beforeEach function obsolete ;)

@emilecantin

@andihit: Not really, before / beforeEach is for setup that is outside of the scope of your test case. It aims to provide an environment in which you can then run your test. If your setup is relevant to the success / failure / performance of your test, then it belongs IN the test...

@andihit

If your setup is relevant to the success / failure / performance of your test, then it belongs IN the test...

In what case is the setup not relevant for the success/failure of the test?

Typically I put the most time intensive stuff in my beforeEach (setting up the fixture, dom, whatever)
and then run my assertions against that which takes virtually no time by comparison.

Same here ;)

@emilecantin

In what case is the setup not relevant for the success/failure of the test?

For example, when I am testing an object's methods, the instantiation of said object is not relevant; it's not what I test. Instantiation is part of another, unrelated test.
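
For instance, something along these lines (illustrative only):

var assert = require('assert');

describe('Array#push', function () {
  var list;

  beforeEach(function () {
    // Construction is context for the test, not the thing under test.
    list = [];
  });

  it('appends an item', function () {
    list.push('a');
    assert.equal(list.length, 1);
  });
});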

@andihit

Technically it is relevant that your object is initialized when you test methods of it.
But I get what you mean, it's not what you test in a particular test.

Maybe we want a second badge for each test that shows the duration of the beforeEach (and afterEach)? Or show detailed information on hover (before, beforeEach, test, afterEach, after durations)?

@emilecantin

I think detailed information on hover would be a nice addition. While I don't think beforeEach / afterEach timings should be included in the test's timings, there's no reason not to measure them. On the command line, we could add a '--detailed' flag, or an alternate reporter.

@tj
Owner
tj commented

@emilecantin yeah i agree, i think a reporter would be the cleanest route, dot and some of the others wouldn't really work, or maybe just a --verbose variant of spec and list

@jaketrent

+1 If not included in test timing, just a separate timing log for the before/after functions.

@tj
Owner
tj commented

in #700 this was brought up again, I think for now maybe aggregating them would be safest. This gets a little tricky since multiple beforeEach's can be used even at the same level, so describing them in the output would be very verbose unless it's opt-in
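
Multiple hooks at one level are legal, which is why a per-hook breakdown gets verbose quickly; an illustrative (made-up) example:

describe('report', function () {
  // Mocha runs every beforeEach registered at this level, in order.
  beforeEach(function () { /* load fixtures */ });
  beforeEach(function () { /* reset the DOM */ });
  beforeEach(function () { /* stub network calls */ });

  it('aggregates totals', function () {
    // A per-hook timing line for each of the three hooks above would add
    // three extra output lines for this single test.
  });
});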

@danielstjules
Collaborator

Looks like this was never completed:

$ cat example.js
beforeEach(function(done) {
  setTimeout(done, 180);
});

it('foo', function(done) {
  setTimeout(done, 100);
});

it('bar', function(done) {
  done();
});

$ ./bin/mocha --reporter spec example.js

  ✓ foo (101ms)
  ✓ bar

  2 passing (468ms)

I could come up with a PR for this. The resulting output would be:

$ ./bin/mocha --reporter spec example.js

  ✓ foo (281ms)
  ✓ bar (181ms)

  2 passing (468ms)

$ ./bin/mocha --reporter spec --verbose example.js

    beforeEach (181ms)
  ✓ foo (281ms)
    beforeEach (181ms)
  ✓ bar (181ms)

  2 passing (468ms)

To clarify, I'd probably keep the current behavior of having timeouts apply to runnables, and not to the aggregated test execution time.
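
A sketch of what that would mean in practice (assumed behavior, not verified against a final implementation):

beforeEach(function (done) {
  setTimeout(done, 180);   // hook takes ~180ms on its own
});

it('foo', function (done) {
  this.timeout(150);       // timeout still applies to this runnable alone,
  setTimeout(done, 100);   // so the test passes even though the reported
});                        // (aggregated) duration would be ~280ms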

@dasilvacontin
Collaborator

@danielstjules It would also be nice to show the name of the hook, for when you have multiple ones.

@jbnicolai
Owner

@danielstjules go for it! :)

@kevinburkeshyp

@danielstjules would love this!
