
Insignificant fetch request errors causing E2E failures #2674

Closed
aaemnnosttv opened this issue Jan 22, 2021 · 6 comments
Labels
P1 Medium priority QA: Eng Requires specialized QA by an engineer Type: Bug Something isn't working
Milestone

Comments

@aaemnnosttv
Collaborator

aaemnnosttv commented Jan 22, 2021

Bug Description

In #1770 we upgraded @wordpress/scripts to the latest version, which included the addition of @wordpress/jest-console; that preset is enabled automatically for our JS unit tests.

In #842, we added @wordpress/jest-console to our Jest config for E2E, which achieved the cleaner output we were looking for, but this seems to have come at the cost of stability.

This Jest preset mocks the main console methods (log, warn, error) and by default fails the test if any of them is called without being explicitly expected. This is a problem for some tests, particularly where an asynchronous request fails simply because the test completed successfully and the page navigated before the response was received. In most cases where the response is significant we explicitly wait for it, so this wouldn't be a problem, but in other cases we don't, and this is where some tests can become unstable and fail purely due to an unexpected console error.
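To make the failure mode concrete, here is a minimal, hand-rolled illustration of what the preset does (this is NOT the actual @wordpress/jest-console implementation, just a sketch of the mechanism):

```javascript
// Minimal illustration of the failure mode (NOT the actual
// @wordpress/jest-console implementation): a console method is replaced
// with a recorder, and any call that was not explicitly expected fails
// the test during verification.
function mockConsoleMethod( methodName ) {
	const original = console[ methodName ];
	const calls = [];
	let expectedCalls = 0;

	console[ methodName ] = ( ...args ) => calls.push( args );

	return {
		// Mirrors matchers like expect( console ).toHaveErrored().
		expectCall() {
			expectedCalls += 1;
		},
		// Mirrors the preset's afterEach check: unexpected calls throw,
		// failing the test even if all of its own assertions passed.
		verify() {
			console[ methodName ] = original;
			if ( calls.length > expectedCalls ) {
				throw new Error(
					`Unexpected console.${ methodName } call: ${ JSON.stringify( calls ) }`
				);
			}
		},
	};
}
```

An E2E test that triggers a late-failing fetch hits exactly the unexpected-call branch above, which is why the test fails even though its own assertions all passed.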

A few possible solutions:

  1. Introduce a new E2E utility to record subsequent fetch requests and build an array of page.waitForResponse promises.
    Then we'd need to identify tests affected by this problem and update them to use the utility before these requests are made, and await Promise.all( responsePromises ) before a subsequent navigation is triggered.
    Ideally this kind of thing could be done for all tests in a global beforeEach/afterEach but that won't really work
  2. Remove @wordpress/jest-console from the E2E Jest config, and only add it selectively for specific tests where we want to be stricter about console logging. This would essentially undo the cleaner output we added in Silence expected console errors during E2E tests #842 but would result in more stable (although less strict) tests. The console output is actually more verbose now, as several lines of context are added around each console log, so this is not a trivial amount of noise to reintroduce.
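A rough sketch of what the utility from option 1 could look like (the name and shape here are assumptions, not a final API): it records fetch requests as the page makes them and builds an array of page.waitForResponse promises that the test can await before navigating.

```javascript
// Hypothetical sketch of option 1 (name and shape are assumptions):
// record fetch requests as the page makes them, building an array of
// page.waitForResponse promises to await before the next navigation.
function createWaitForFetchRequests( page ) {
	const responsePromises = [];

	const onRequest = ( request ) => {
		// Only in-flight fetch requests can produce the unexpected
		// "You are probably offline" console.error on navigation.
		if ( request.resourceType() === 'fetch' ) {
			responsePromises.push(
				page
					.waitForResponse( ( response ) => response.url() === request.url() )
					// Ignore responses that never arrive (e.g. aborted on teardown).
					.catch( () => {} )
			);
		}
	};

	page.on( 'request', onRequest );

	// Call (and await) this before triggering the next navigation.
	return async () => {
		page.off( 'request', onRequest );
		await Promise.all( responsePromises );
	};
}
```

A spec would call `const waitForFetches = createWaitForFetchRequests( page );` before the interaction under test, then `await waitForFetches();` before navigating away.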
Example console error

JSHandle data is destructured into a string here to make it readable. The “You are probably offline” error comes from @wordpress/api-fetch when fetch fails.

  console.error
    Google Site Kit API Error method:GET datapoint:health-checks type:core identifier:site error:You are probably offline. }

      214 | 
      215 | 		// eslint-disable-next-line no-console
    > 216 | 		console[ logFunction ]( text );
          | 		^
      217 | 	} );
      218 | }
      219 | 

      at Page.<anonymous> (config/bootstrap.js:216:3)
      at Page._addConsoleMessage (../../node_modules/puppeteer/lib/Page.js:628:10)
      at Page._onConsoleAPI (../../node_modules/puppeteer/lib/Page.js:550:10)
      at CDPSession.<anonymous> (../../node_modules/puppeteer/lib/Page.js:119:57)
      at CDPSession._onMessage (../../node_modules/puppeteer/lib/Connection.js:200:12)

Do not alter or remove anything below. The following sections will be managed by moderators only.

Acceptance criteria

  • A new E2E utility should be introduced to allow for awaiting the response of an arbitrary collection of fetch requests
  • Current unstable tests which have been failing due to inconsistent console logging/errors should be updated with this utility to wait for fetch responses before navigating
  • These E2E tests should consistently no longer fail with unexpected API errors
    • specs/plugin-activation.test.js
    • specs/dashboard/dashboard-noscript.test.js
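A hypothetical spec fragment showing the intended pattern (the endpoint match and URL here are assumptions, not the actual spec code): explicitly await a known in-flight response before navigating, so the request cannot fail mid-flight and log an unexpected console.error.

```javascript
// Hypothetical spec fragment (the endpoint match and URL are assumptions):
// await the response of a known in-flight request before navigating, so
// it cannot be aborted mid-flight and log a console.error.
async function settleHealthChecksThenNavigate( page, nextUrl ) {
	const healthChecksResponse = page.waitForResponse( ( response ) =>
		response.url().includes( 'health-checks' )
	);

	await healthChecksResponse;
	// The response has arrived; navigating now cannot abort the request.
	await page.goto( nextUrl );
}
```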

Implementation Brief

Test Coverage

  • Update specs/plugin-activation.test.js to make the proxy and GCP scenarios more consistent by extracting common tests into shared functions used in both describe blocks
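The shared-function pattern above could be sketched as follows (function and test names are illustrative, not the actual spec code): the common assertions live in one factory that both describe blocks invoke.

```javascript
// Sketch of the shared-test pattern (names are illustrative, not the
// actual spec code). Both scenarios would register the factory, e.g.:
//   describe( 'plugin activation (proxy)', activationTests( { usingProxy: true } ) );
//   describe( 'plugin activation (GCP)', activationTests( { usingProxy: false } ) );
function activationTests( { usingProxy } ) {
	return () => {
		it( 'activates without unexpected console errors', async () => {
			// ...assertions shared by both scenarios...
		} );

		if ( usingProxy ) {
			it( 'links to the authentication proxy setup', async () => {
				// ...proxy-only assertions...
			} );
		}
	};
}
```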

Visual Regression Changes

  • N/A

QA Brief

  • This issue likely will not fix all known E2E instabilities, but should fix the instabilities we were seeing rather consistently, particularly with specs/plugin-activation.test.js.

Changelog entry

  • N/A
@aaemnnosttv aaemnnosttv added Type: Bug Something isn't working P1 Medium priority labels Jan 22, 2021
@aaemnnosttv aaemnnosttv self-assigned this Jan 22, 2021
@tofumatt
Collaborator

I'm certainly inclined to go with the first option. Noise in the tests makes them harder to read, and also causes false positives for diagnostics, etc. This is a pain to deal with, but I think doing the more thorough job of finding places where we should wait on subsequent fetch requests is the best option.

Reintroducing the console noise from option 2 long-term is not good in my mind.

That said, right now this is causing some incorrect test failures and blocking us from merging legitimate issues.

It's possible we might want to disable @wordpress/jest-console for now until the first option presented is done.

@aaemnnosttv
Collaborator Author

Reintroducing the console noise from option 2 long-term is not good in my mind.

Agreed. Unfortunately, it seems we have to choose between leaking implementation details into our tests and ignoring console logs from the client altogether. Neither option is good, but we can minimize the compromises needed.

It's possible we might want to disable @wordpress/jest-console for now until the first option presented is done.

We can't simply comment out the line where we include it in the setup, as that would break all of the matchers from it that we use now, so I think the first option is actually more viable to start with anyway.

I have a working POC of the first approach that I can put together and I guess we can start with that?

@tofumatt
Collaborator

I have a working POC of the first approach that I can put together and I guess we can start with that?

Yeah, let's start with that. Right now there are several PRs I can't really review because they are almost always failing on these tests. 😒

@eclarke1 eclarke1 added this to the Sprint 41 milestone Jan 26, 2021
@eclarke1 eclarke1 removed the Next Up label Jan 26, 2021
@aaemnnosttv aaemnnosttv added the QA: Eng Requires specialized QA by an engineer label Jan 26, 2021
@aaemnnosttv
Collaborator Author

@felixarntz this one is ready for review and has a PR which should enhance the stability of E2E significantly (at least regarding the failures we've been seeing lately).

@felixarntz
Member

IB = CR = ✅

@felixarntz felixarntz removed their assignment Jan 26, 2021
@eugene-manuilov eugene-manuilov self-assigned this Jan 27, 2021
@eugene-manuilov
Collaborator

QA ✔️ Looks like it works stably now.
