
Engine unit test is too verbose #94

Closed
pmdartus opened this issue Feb 13, 2018 · 9 comments

Comments

@pmdartus
Member

Description

Since the merge of the snabbdom refactor (#47), the engine tests have become extremely verbose: they produce more than 10k lines of stack traces. All these logs are a real blocker for running the engine tests in watch mode.

One of the biggest offenders is the engine's check that the same tag name is not used by multiple components: registerComponent

@pmdartus pmdartus changed the title Engine unit tests are too verbose Engine unit test is too verbose Feb 13, 2018
@caridy
Contributor

caridy commented Feb 13, 2018

Yeah, we need to spend time fixing all tests to either a) slice tests further so it is easy to manage different tagNames in the same test, or b) use a different tagName for each test in every test file.

Part of that work is also to do the following for each tagName:

interface HTMLElementTagNameMap {
    'x-foo1': MyComponent1;
}

which tells the type system that creating such tags is bound to that class definition, and this will remove many type errors in tests.

@pmdartus
Member Author

What about disabling the check in test mode?

@caridy
Contributor

caridy commented Feb 13, 2018

@pmdartus do we have such mode?

@pmdartus
Member Author

Jest sets process.env.NODE_ENV to `test` by default: jest-runtime

We can rewrite registerComponent to:

export function registerComponent(tagName: string, Ctor: ComponentConstructor) {
    if (!isUndefined(TagNameToCtor[tagName])) {
        if (TagNameToCtor[tagName] === Ctor) {
            return;
        } else if (process.env.NODE_ENV !== 'production' && process.env.NODE_ENV !== 'test') {
            // TODO: eventually we should throw, this is only needed for the tests today
            assert.logWarning(`Different component class cannot be registered to the same tagName="${tagName}".`);
        }
    }
    TagNameToCtor[tagName] = Ctor;
}
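To make the proposed gating concrete, here is a self-contained sketch of the same logic with the registry and warning sink stubbed out so the behavior is observable (TagNameToCtor, logWarning, and the classes below are local stand-ins, not the engine's actual internals):

```typescript
// Minimal stand-ins for the engine's internals.
type Ctor = new () => object;
const TagNameToCtor: Record<string, Ctor> = {};
const warnings: string[] = [];
const logWarning = (msg: string) => warnings.push(msg);

function registerComponent(tagName: string, Ctor: Ctor): void {
    if (TagNameToCtor[tagName] !== undefined) {
        if (TagNameToCtor[tagName] === Ctor) {
            // Re-registering the same class is a no-op.
            return;
        } else if (process.env.NODE_ENV !== 'production' && process.env.NODE_ENV !== 'test') {
            // Fires in development only; silent under Jest (NODE_ENV === 'test').
            logWarning(`Different component class cannot be registered to the same tagName="${tagName}".`);
        }
    }
    // Last registration wins, matching the original fall-through behavior.
    TagNameToCtor[tagName] = Ctor;
}

// Simulate the Jest environment: re-registration stays silent.
process.env.NODE_ENV = 'test';
class A {}
class B {}
registerComponent('x-foo', A);
registerComponent('x-foo', B);
// warnings stays empty, and 'x-foo' now maps to B.
```

Note that the warning branch does not return, so the registry entry is overwritten either way; only the log output is suppressed in test mode.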

@davidturissini
Contributor

@pmdartus I thought that we talked about having multiple modes and it was a no-go because it would get complicated really quickly. I prefer the other approach of not recycling tag names because that matches more closely how LWC will be used, especially when customElements.define is used under the hood.

@pmdartus
Member Author

Then, going down this route, we need to be more restrictive and throw if two tests use the same name. A warning is not enough to prevent new tests from reusing existing names.

My main concern is that we will see an explosion of components with convoluted names, and I don't think it will be maintainable in the long run.

That being said we still have a couple of options to avoid adding another mode:

  • call jest.resetModules between each test to make sure we start again with a clean slate. It's something I use in the performance timing tests to reset the uid between each test. One unpleasant side effect of this approach is that it slows down the test suite, because it forces Node to re-evaluate the modules.
  • expose an internal flushRegistry method on def.ts and call it between each test. This method will get tree-shaken when bundling the engine.
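A rough sketch of that second option, under the assumption that the registry is a plain object in def.ts (the `flushRegistry` name and shape here are illustrative, not the engine's actual API):

```typescript
// Illustrative module mimicking def.ts: a plain-object registry plus a
// test-only escape hatch.
type Ctor = new () => object;
const TagNameToCtor: Record<string, Ctor> = {};

export function registerComponent(tagName: string, Ctor: Ctor): void {
    TagNameToCtor[tagName] = Ctor;
}

// Empties the registry so each test starts from a clean slate. Production
// code never calls it, so bundlers can tree-shake it out of the engine build.
export function flushRegistry(): void {
    for (const tagName of Object.keys(TagNameToCtor)) {
        delete TagNameToCtor[tagName];
    }
}

// A test suite would call it from a hook, e.g. beforeEach(() => flushRegistry()).
class Foo {}
registerComponent('x-foo', Foo);
flushRegistry();
// The registry is empty again; 'x-foo' can be reused by the next test.
```

Unlike jest.resetModules, this keeps the module instance (and its evaluation cost) while discarding only the tag-name state.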

After listing the different options we have, I still believe that using process.env.NODE_ENV is the simplest and most reliable approach. But that's just my 2 cents.

@davidturissini
Contributor

@pmdartus Could we run the tests in production mode as opposed to dev mode?

@pmdartus
Member Author

pmdartus commented Feb 13, 2018

Today the tests run in test mode. We don't do any special handling for this mode, since we only check whether we are in production.

I think it's useful to run the tests in development mode to validate the assertions and warnings we put in place for that mode. But maybe it's worth investigating running the tests in production mode as well.


Edit:

React runs 4 suites of tests:

  • Test from source in development mode
  • Test from source in prod mode
  • Test from build artifact (react.js) in development mode
  • Test from build artifact (react.js) in prod mode

@pmdartus pmdartus mentioned this issue Feb 14, 2018
@pmdartus
Member Author

pmdartus commented Mar 8, 2018

Here is a branch with all the verbose unit tests skipped: https://github.com/salesforce/lwc/compare/pmdartus/localize-all-warnings

That should be useful for anyone willing to tackle this issue.

pmdartus added a commit that referenced this issue May 31, 2018
## Details

This PR changes the way Jest is configured in the mono-repo to leverage the `projects` feature properly. This allows us to have a different Jest config for each project.

Changes:
* Remove component registry in the engine
* Fix tests that are using `root` instead of `template`
* Fix tests with duplicate component names

> The test runner will also print the package name next to the test 💃 

<img width="710" alt="screen shot 2018-05-21 at 8 41 41 am" src="https://user-images.githubusercontent.com/2567083/40316332-db609542-5cd2-11e8-9144-9085667896d4.png">

----

In an upcoming PR, I want to get rid of all the remaining warnings by introducing a custom matcher that traps warnings and errors (#94, #104). Any unexpected console `log`, `warn`, or `error` should make the test fail. Doing this would greatly improve the quality and consistency of the warnings the engine surfaces.

## Does this PR introduce a breaking change?

* [ ] Yes
* [X] No