
Improve Tests #1067

Closed
claviska opened this issue Dec 9, 2022 · 20 comments
Labels
testing Everything related to testing.

Comments

@claviska
Member

claviska commented Dec 9, 2022

Any help with tests is appreciated! Things that still need to be done:

  • Audit existing tests to identify which components need more tests, which tests need to be improved, and which ones are good to go
  • Improve consistency of existing tests
    • A lot of contributors have pitched in to help with testing. Unfortunately, this introduced multiple ways to do the same thing. It would be nice to refactor them to improve consistency and maybe even document the preferred way to test various things in docs/resources/contributing.md

I don't care about the number of tests or even coverage — I care about sensibility and making sure the important things are covered.

If you plan on assisting with this, please post below so we can coordinate. Thanks!

@claviska claviska added help wanted Ready for a contributor to tackle. testing Everything related to testing. labels Dec 9, 2022
@claviska claviska added this to the 2.0 (stable) milestone Dec 9, 2022
@claviska claviska mentioned this issue Dec 9, 2022
@dhellgartner
Contributor

Hello,
I guess I am joining a bit late; I just noticed that this issue replaced the old "write tests" issue.

I am currently trying to write tests for sl-tab-group (see this commit), but it still needs more tests before it is finished.

Concerning the general testing issue:

  • I agree that the consistency is a bit lacking
  • I would like to help with this issue, but to manage expectations: I probably cannot do much in the near term, as I fear I cannot significantly increase my current commit frequency or time investment.

making sure the important things are covered.

The question is what is important. When I write tests, I try to check the following, in decreasing order of importance (a minimal sketch of the first two checks follows the list):

  • Make sure the component renders at all
  • Make sure the component is accessible
  • Make sure every feature (property, events, method) mentioned in the documentation works as promised
  • Try to cover any edge case I happen to notice when looking at the code
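
To make the first two checks concrete, here is a minimal sketch using helpers that @open-wc/testing already provides (fixture, html, expect, and the bundled accessibility assertion); the component choice and test names are illustrative, not taken from an existing test file:

import { expect, fixture, html } from '@open-wc/testing';

describe('<sl-tab>', () => {
  // 1. Make sure the component renders at all
  it('renders', async () => {
    const el = await fixture(html` <sl-tab>General</sl-tab> `);
    expect(el).to.exist;
  });

  // 2. Make sure the component is accessible
  it('passes accessibility checks', async () => {
    const el = await fixture(html` <sl-tab>General</sl-tab> `);
    await expect(el).to.be.accessible();
  });
});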

I am fully open to other suggestions.

This being a component library, one might also think about screenshot testing but I fear that is another can of worms which I would not recommend touching here.

@claviska
Member Author

I would like to help on this issue, but to manage expectations, I probably cannot do too much in the next time period.

Thanks, and no worries. We should probably focus on one test file first, then derive and document conventions and best practices from that before jumping into others. A good place for this might be docs/resources/contributing.md. I'm happy to tackle this once we evaluate and update a complete test file.

The question is what is important. If I try to write tests, I try to do the following in decreasing importance:

Make sure the component renders at all
Make sure the component is accessible
Make sure every feature (property, events, method) mentioned in the documentation works as promised
Try to cover any edge case I happen to notice when looking at the code

This sounds like a good plan!

Re: mocking — is there a reason why you're mocking IntersectionObserver? This shouldn't be necessary since the tests are running in real browsers.

Another thing that's probably very opinionated, but also very helpful for me and other contributors who jump into tests, is not reusing fixtures. In other words, instead of doing this:

const createTabGroup = async (): Promise<SlTabGroup> => {
  return fixture<SlTabGroup>(html`
    <sl-tab-group>
      <sl-tab slot="nav" panel="general" data-testid="general-tab-header">General</sl-tab>
      <sl-tab slot="nav" panel="custom" data-testid="custom-tab-header">Custom</sl-tab>
      <sl-tab slot="nav" panel="advanced" data-testid="advanced-tab-header">Advanced</sl-tab>
      <sl-tab slot="nav" panel="disabled" disabled data-testid="disabled-tab-header">Disabled</sl-tab>
      <sl-tab-panel name="general" data-testid="general-tab-content">This is the general tab panel.</sl-tab-panel>
      <sl-tab-panel name="custom" data-testid="custom-tab-content">This is the custom tab panel.</sl-tab-panel>
      <sl-tab-panel name="advanced" data-testid="advanced-tab-content">This is the advanced tab panel.</sl-tab-panel>
      <sl-tab-panel name="disabled" data-testid="disabled-tab-content">This is a disabled tab panel.</sl-tab-panel>
    </sl-tab-group>
  `);
};

Can we recreate the fixture in every test, but reduce it to the minimal code necessary to facilitate the test?

it('does something', async () => {
  const tabGroup = await fixture<SlTabGroup>(html`
    <sl-tab-group>
      <sl-tab slot="nav" panel="general">General</sl-tab>
      <sl-tab slot="nav" panel="custom">Custom</sl-tab>
      <sl-tab slot="nav" panel="advanced">Advanced</sl-tab>
      <sl-tab-panel name="general">This is the general tab panel.</sl-tab-panel>
      <sl-tab-panel name="custom">This is the custom tab panel.</sl-tab-panel>
      <sl-tab-panel name="advanced">This is the advanced tab panel.</sl-tab-panel>
    </sl-tab-group>
  `);

  // ...
});

Yes, this creates more boilerplate, but it lets us keep each test case minimal and update tests independently as the project evolves, without worrying about breaking other tests when we add new things or change existing fixtures.

@dhellgartner
Contributor

Re: mocking — is there a reason why you're mocking IntersectionObserver? This shouldn't be necessary since the tests are running in real browsers.

The honest answer is: because it was required to make the test work.
Thank you for pinging me again on this. I did some further investigation and found that playwrightLauncher({ product: 'chromium' }) launches Playwright in headless mode by default. So this is actually not running in a headed browser but in a headless one. As a result, observers are not triggered, including the IntersectionObserver and the ResizeObserver, and the CSS is not fully evaluated.

This can be fixed by updating the configuration to

    playwrightLauncher({ product: 'chromium', launchOptions: { headless: false } }),
    playwrightLauncher({ product: 'firefox', launchOptions: { headless: false } }),
    playwrightLauncher({ product: 'webkit', launchOptions: { headless: false } })

but that has another downside: if you run the tests in headed mode, all the browsers actually open. This blocks your PC during test execution and is a bit startling the first time you see it. Apart from that, it takes time and resources to run, and I am not sure whether the CI pipeline can actually do it. The screenshot shows how it looks on my machine:
[Screenshot: headed test run with multiple browser windows open simultaneously]
On the pro side, you can remove the mocked observers and the test still works.

Another thing that's probably very opinionated, but also very helpful for me and other contributors who jump into tests is not reusing fixtures.

This is fully fine with me. I have rewritten the test so it does not reuse fixtures; see 44c7c73840af5dce84a167ae95235bb729b2ecc3

@claviska
Member Author

Thanks for clarifying. In that case, do you think it makes sense to split out the mocks so they're reusable for other (future) tests? At a glance, I don't see anything specific to tab group in the class, but it's definitely possible I'm overlooking something.

@dhellgartner
Contributor

It took me some time but I managed to extract it (see this commit). Hope the positioning of the files suits you.
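
For readers following along, an extracted, reusable observer mock might look roughly like the sketch below. This is only an illustration of the idea; the file name, class name, and trigger helper are assumptions, not the contents of the linked commit:

// resize-observer.mock.ts (hypothetical file name)
export class ResizeObserverMock {
  private callback: ResizeObserverCallback;
  private observedElements: Element[] = [];

  constructor(callback: ResizeObserverCallback) {
    this.callback = callback;
  }

  observe(element: Element): void {
    this.observedElements.push(element);
  }

  unobserve(element: Element): void {
    this.observedElements = this.observedElements.filter(el => el !== element);
  }

  disconnect(): void {
    this.observedElements = [];
  }

  // Tests call this to simulate the browser firing the observer.
  trigger(entries: ResizeObserverEntry[]): void {
    this.callback(entries, this as unknown as ResizeObserver);
  }
}

// Installed globally before the tests that need it:
// window.ResizeObserver = ResizeObserverMock as unknown as typeof ResizeObserver;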

I also managed to add the first test on scrolling.

@claviska
Member Author

claviska commented Jan 3, 2023

It took me some time but I managed to extract it (see this commit). Hope the positioning of the files suits you.

This looks great, thanks!

I also managed to add the first test on scrolling.

Amazing! 🎉

@KonnorRogers
Collaborator

Just sniping in, but I've noticed a number of similar issues about IntersectionObserver events not firing. Does setting an initial window size do anything in headless mode?

Example:

    playwrightLauncher({
      launchOptions: {
        headless: true,
        args: ['--window-size=1920,1080'],
      },
    }),

However, the above only applies to Chromium. I wonder if setting a default viewport size using await setViewport() would have the same effect:

import { setViewport } from '@web/test-runner-commands';

describe('my component', () => {
  it('works on 360x640', async () => {
    await setViewport({ width: 360, height: 640 });
  });
});

https://modern-web.dev/docs/test-runner/commands/#viewport

@dhellgartner
Contributor

Unfortunately, this did not work. I tried both variants but still needed the mock for the ResizeObserver.

@dhellgartner
Contributor

Finally managed to get rid of the mocked observers (see this commit).

The reason it worked in a headed browser before was that the timing differs from a headless browser. Adding the correct waits helped.
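
For anyone hitting the same timing problem, the shape of the fix is sketched below using waitUntil and aTimeout from @open-wc/testing; the exact condition to wait for is a hypothetical example and has to be adapted to the behavior under test:

import { expect, fixture, html, waitUntil, aTimeout } from '@open-wc/testing';

it('selects the first tab once the group has initialized', async () => {
  const tabGroup = await fixture(html`
    <sl-tab-group>
      <sl-tab slot="nav" panel="general">General</sl-tab>
      <sl-tab-panel name="general">This is the general tab panel.</sl-tab-panel>
    </sl-tab-group>
  `);

  // Yield to the event loop so observers (Intersection/Resize) get a chance to fire.
  await aTimeout(0);

  // Prefer waiting for a concrete condition over a fixed delay. The selector
  // below is a hypothetical example of such a condition.
  await waitUntil(() => tabGroup.querySelector('sl-tab[active]') !== null);

  expect(tabGroup.querySelector('sl-tab[active]')).to.exist;
});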

@claviska
Member Author

@dhellgartner just a heads up that a number of tests were added/updated in next recently. Are you interested in opening a draft PR so we can be more mindful of your contributions and maybe save you some effort?

It would be better to coordinate than to work in the dark; I've been considering some reorganization of the test files this week, and I'd hate for that to impact you negatively.

@dhellgartner
Contributor

Sure, here you go:
#1128

I also merged the next branch into the feature branch; luckily, there were no conflicts.

Sorry for taking so long to complete these tests.

@claviska claviska removed this from the 2.0 (stable) milestone Jan 23, 2023
@dhellgartner
Contributor

Now that the first test is finally done, how should we continue?
My suggestion would be to go through the tests again, top to bottom, and try to improve them one by one. After finishing a few, it should be possible to extract some principles to post in contributing.md.
But I am fully open to other ideas.

@claviska
Member Author

claviska commented Feb 6, 2023

That works for me. At a high level, we need a list of components we can "check off" as they get worked on to prevent other contributors from duplicating efforts.

If you want to update contributing.md with testing principles, conventions, etc., that will be a good place to link people to when they ask about contributing tests. I'll be sure to add bullet points as it makes sense whenever I'm working on tests myself.

My biggest ask is that we strive for consistency and document our approach. Simple > clever. I do this as much as possible with the code, but tests have been contributed from numerous sources so they're currently a mixed bag.

I really appreciate your help with this!

@dhellgartner
Contributor

I tried to extend the tests for SlAlert (see #1189).

@dhellgartner
Contributor

Two things:

@dhellgartner
Contributor

Added tests to sl-animated-image: #1246

@dhellgartner
Contributor

And a small test for sl-animation: #1274

@dhellgartner
Contributor

And a bit of testing for sl-split-panel #1343

@dhellgartner
Contributor

And a round of tests for qr-code #1416

Note that I fixed a small bug along the way (the background color was not passed to the QR code). If you would prefer that next time I open a bug report plus a PR, please tell me.

I can also still scrap the PR and re-file the fix as a bug report.

@claviska claviska removed the help wanted Ready for a contributor to tackle. label Nov 20, 2023
@claviska
Member Author

Thanks, everyone! Now that we have a team, we'll be taking tests in house to get things cleaned up and more consistent.

I really appreciate the effort that's gone into testing thus far! 🙌
