
🏗 Surface console errors during tests as mocha failures #14336

Merged · 3 commits · Apr 4, 2018
Conversation

@rsimha (Contributor) commented Mar 30, 2018

Calls to console.info, console.log, console.warn, and console.error are suppressed during AMP tests by the client.captureConsole: false configuration value in karma-conf.js. Without this, the test logs would become extremely noisy. However, suppressing console.error in particular is harmful, since genuine runtime errors during tests can be masked.

This PR does the following:

  • Stubs calls to console.error during tests and surfaces them as mocha test errors, causing the individual test that generated the error to fail with a useful message. Other tests continue to run as normal.
  • Fixes / skips tests that were silently passing earlier, but now fail due to a legitimate console error from the runtime.
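The idea behind the stub can be sketched roughly as follows. This is a hypothetical, simplified version for illustration only; the PR's actual implementation lives in test/_init_tests.js and uses sinon.

```javascript
// Hypothetical sketch: record console.error calls and surface them
// as a test failure. Not the PR's actual code.
const recordedErrors = [];
const realConsoleError = console.error;

// Record every console.error call instead of letting it pass silently.
console.error = (...args) => {
  recordedErrors.push(args.join(' '));
};

// A mocha afterEach hook could then fail the test if anything was recorded.
function failOnConsoleError() {
  if (recordedErrors.length > 0) {
    const message = recordedErrors.join('; ');
    recordedErrors.length = 0; // reset for the next test
    throw new Error('console.error called during test: ' + message);
  }
}

// Simulate a test body that logs an error...
console.error('something went wrong');
let testFailed = false;
try {
  failOnConsoleError();
} catch (e) {
  testFailed = true; // ...and the check flags it as a failure.
}
console.error = realConsoleError; // restore the real console.error
console.log(testFailed); // → true
```

The key property is that the failure is attributed to the individual test that triggered the error, rather than aborting the whole run.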

Instructions:

If a test you wrote was skipped by this PR, unskip it and see what the console.error is all about.

  • If the error is genuine, fix the runtime code that generated the error.
  • If the error is expected during the test, use an expect block around the test step that generates the error.

Examples (UPDATED): See here.

Examples (OUTDATED):

This test will fail...

it('test with a console error', () => {
  console.error('foo');
});

... and so will this test...

it('test with an unexpected console error', () => {
  expect(() => {
    console.error('foo');
  }).to.throw('bar');
});

... but this test will pass...

it('test with an expected console error', () => {
  expect(() => {
    console.error('foo');
  }).to.throw('foo');
});

Fixes #7381
Partially addresses #14360
Tracking issue for unskipping tests #14406

@rsimha (Contributor, Author) commented Mar 31, 2018

/to @cramforce @erwinmombay @choumx

I'm still working on this, but figured I'd send it out for review in the meantime. The main change to review is at test/_init_tests.js.

The rest of the changes either fix broken tests, or skip ones that were masking console errors and silently passing before this PR.

@cramforce (Member) commented:

Is there a pattern to "expect" an error, so that it doesn't fail the test?

@rsimha (Contributor, Author) commented Apr 2, 2018

@cramforce Yes, there is. I've updated the PR description.

@cramforce (Member) commented:

How do others feel about skipping so many tests? I suppose it's good to get this in. Maybe we can do a one-time follow-up to add expectations where it makes sense.

@rsimha (Contributor, Author) commented Apr 2, 2018

Yeah, I thought about it last week as I skipped test after test (...after test). The alternative (surfacing errors as test failures and fixing all the tests in one PR) seemed too onerous and error-prone to attempt.

FWIW, we're skipping ~200 out of ~6000 unit tests, and ~10 out of ~400 integration tests.

Once I get this PR to pass Travis, I'll add all the folks whose tests are being skipped as reviewers for comment and see what they think.

@dreamofabear commented:

Tangent: a code coverage dashboard could be a carrot for fixing flaky extension tests... and adding visual diff tests. 😄

@rsimha (Contributor, Author) commented Apr 4, 2018

@cramforce @choumx @erwinmombay Travis is green! This PR is now ready for a full review.

All the tests that are being skipped in this PR were silently passing in spite of one or more console errors being thrown by the AMP runtime.

I've assigned TODOs to whoever substantially edited the skipped tests most recently. Instructions for fixing these tests are in the PR description. In most cases, a skipped test (or a set of skipped tests) can be fixed with a small change to either the runtime code, or by wrapping a function call in a test within an expect statement. With this, our tests should become a lot less flaky.

ccing a few people for visibility and comment: @dvoytenko, @aghassemi, @lannka, @zhouyx, @cvializ, @alanorozco, @bradfrizzell

@cramforce (Member) left a comment:

I feel the skips are too coarse-grained. Could we instead automatically add expectations for whatever error logs we need for the failing tests?

submitButton.setAttribute('type', 'submit');
form.appendChild(submitButton);
// TODO(aghassemi, #14336): Fails due to console errors.
describe.skip('AmpForm Integration', () => {

I'm scared of skipping this test.

@rsimha (Contributor, Author) commented Apr 4, 2018

@cramforce I wasn't able to make the skips any more fine-grained than they are, because for many tests / test suites, the errors were being thrown in beforeEach or afterEach.

Re: automatically expecting the errors, I did try to debug the first few tests I skipped to see where the expectations would need to go, and quickly realized that doing so meant digging into the source code of each component under test. At that rate, I couldn't fix all the failing AMP tests even if I spent the next month working on nothing but this PR. I'd imagine it will take the person who wrote each test a fraction of the time, given their knowledge of how their component works.

Another thing I noticed while debugging is that a lot of the AMP tests are very flaky as they're currently written. I figured that this PR would work as a good incentive to fix our tests for good.

One option is for me to leave this PR unmerged until each component owner has had a chance to cherry-pick the first commit in this PR (the one that adds the stub for console.error), run all their tests, and fix them, or suggest fixes via a comment on this PR.

Finally, here's a visual representation of how many unit tests we're actually skipping via this PR (about 6% of the total number of unit tests). For integration tests, we're only skipping tests in 5-6 files.

Before / after: (screenshots of the test runs omitted)

WDYT?

@cramforce (Member) left a comment:

I'm fine with shipping this. Please report back in 4 weeks on how many of the tests were unskipped.

@rsimha (Contributor, Author) commented Apr 4, 2018

SGTM, will track this via a GH issue.

Edit: See #14406 and #14360 for more on this.

@rsimha rsimha merged commit 9a97b5b into ampproject:master Apr 4, 2018
@rsimha rsimha deleted the 2018-03-30-ConsoleError branch April 4, 2018 03:53
@dreamofabear commented:

I wonder if this will make our integration tests less flaky, i.e. whether these tests were a major source of flakiness. Something to watch out for.

@rsimha (Contributor, Author) commented Apr 4, 2018

@choumx I think our integration test flakiness arises from a combination of errors that were not being caught (now fixed) and asynchronous operations that cause tests to time out some of the time.

Agree, it's worth keeping an eye on.

@rsimha (Contributor, Author) commented Apr 11, 2018

The changes in this PR were replaced by the changes in #14336.

Highlights:

  • We now use sinon.mock instead of sinon.stub to keep track of when console.error is called during tests.
  • We no longer throw an error during tests when console.error is detected (some tests have legitimate reasons to do so). Instead, we check for console.error in the global afterEach.
  • Tests can wrap code that legitimately calls console.error within an allowConsoleError block (see updated examples above).
  • For now, tests that call console.error without wrapping the call in allowConsoleError will print a warning immediately after the test (when run with --files) / at the end of all tests (when all tests are run).
  • Once all existing instances of console.error are fixed or wrapped in allowConsoleError, we can switch to failing tests with errors.
  • Here is an example of the error messages you might see during local development.
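The allowConsoleError pattern described above can be sketched roughly like this. This is a hypothetical, simplified version; the actual helper in test/_init_tests.js differs in detail (for one, it uses sinon.mock rather than a hand-rolled wrapper).

```javascript
// Rough sketch of the allowConsoleError pattern; not the actual
// helper from test/_init_tests.js.
let consoleErrorAllowed = false;
const unexpectedErrors = [];

// Only record console.error calls made outside an allowConsoleError block.
console.error = (...args) => {
  if (!consoleErrorAllowed) {
    unexpectedErrors.push(args.join(' '));
  }
};

// Run a callback with console.error temporarily permitted.
function allowConsoleError(callback) {
  consoleErrorAllowed = true;
  try {
    return callback();
  } finally {
    consoleErrorAllowed = false;
  }
}

// An expected error inside the block is not flagged...
allowConsoleError(() => {
  console.error('expected error');
});

// ...while one outside the block is recorded as unexpected.
console.error('unexpected error');
console.log(unexpectedErrors); // one entry: 'unexpected error'
```

A global afterEach hook could then inspect the recorded list and warn (or, eventually, fail) when it is non-empty, matching the phased rollout described in the bullets above.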

Successfully merging this pull request may close: Change tests to fail on logged errors unless they are expected