A first draft of testing guidelines #169

Open
wants to merge 2 commits into master
Conversation

@craftninja
Contributor

craftninja commented Mar 1, 2019

Maybe changes:

  • links to good examples
  • move to ci-cd guidelines? or does some of that content move here? how do we differentiate between these two guideline pages?

What is this?

Hello friends!

This is a first draft of testing guidelines. I started from the conversation in #157

obligatory gif


Here are some things to consider as you develop your package.

Some useful tools:


@ljharb

ljharb Mar 1, 2019

Contributor

It’d be good to clarify browser/engine support of each, paired with a statement higher up that it’s important to test in all supported environments


@craftninja

craftninja Mar 1, 2019

Author Contributor

@ljharb I added something about making sure things work in all supported environments, do you think that is clear enough? Or do we want to go into details about how?

Testing is useful for developing new features, refactoring with confidence, and making sure new things don't break old things. When other people rely on your code for their own applications, testing helps you make sure things don't break for more than just yourself!

Testing is even more important when new maintainers take over a package. Sometimes when we work on a codebase for a long time we forget to articulate all the cognitive load to which we have become accustomed. Good test coverage can alleviate this burden.


@Eomm

Eomm Mar 1, 2019

Member

It could be useful to also add a line for each of these (see the sketch below):

  • unit test: test your code
  • integration test: test your code together with other applications and dependencies
  • acceptance test: test that your application holds up under performance requirements, heavy load, etc.
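
Purely as an illustration (not part of the draft), a minimal sketch of what a test along these lines could look like, using only Node's built-in `assert` and a hypothetical `slugify` helper:

```js
// Hypothetical code under test; in a real package this would live in lib/
// and be pulled in with require().
function slugify(input) {
  return input.trim().toLowerCase().replace(/\s+/g, '-');
}

const assert = require('assert');

// Unit test: exercise the function's observable behaviour directly.
assert.strictEqual(slugify('Hello World'), 'hello-world');
assert.strictEqual(slugify('  Trim Me  '), 'trim-me');
console.log('slugify tests passed');
```

Running a file like this with `node` exits non-zero if any assertion throws, which is all most tooling needs to tell pass from fail.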


@ljharb

ljharb Mar 1, 2019

Contributor

I’d be careful about unit vs integration; unit tests can also integrate across all dependencies.


@craftninja

craftninja Mar 1, 2019

Author Contributor

@Eomm Added your bullet points to the How section; see if that flow looks good to you.
@ljharb - Do you think something needs to be reworded or added to reflect this concern?


@ljharb

ljharb Mar 2, 2019

Contributor

I would avoid categorizing types of tests at all.


@Eomm

Eomm Mar 2, 2019

Member

@ljharb When I said "dependencies" I meant a DB or whatever else sits outside the module's domain. I think this concept helps programmers design their modules better, because if they rely on global variables everywhere (for example) they will also have a hard time writing unit tests. From what I see in my daily job there is still confusion around this, and focusing on the right terms could unlock a new way of thinking about design. Of course a BIG package doesn't need this information, but when I try to figure out who will read the docs we are writing, I picture a young dev who wants to write the "perfect module" and would like to know that multiple types of test exist, so they can go deeper ✌
What do you think of this vision?


@ljharb

ljharb Mar 3, 2019

Contributor

I think that it’s such a complex subject that we shouldn’t delve into it.

What you mean isn’t “dependencies” (which in this context are the things npm install installs) but low-level i/o - the network, the filesystem, etc. Certainly your tests can either mock these out or exercise them. However, the common case is to mock out low-level i/o and otherwise unit test (by which I mean fully integration test all of your code, except for that low-level i/o) all of your code’s observable API and semantics.
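
As a rough sketch of this pattern (a hypothetical `loadConfig` function, not from the draft), the low-level filesystem read is injected so a test can stub it out, while all of the remaining logic is exercised for real:

```js
const assert = require('assert');

// Hypothetical code under test: only the injected reader touches low-level i/o.
function loadConfig(readFile, path) {
  const raw = readFile(path, 'utf8'); // low-level i/o, injectable so tests can stub it
  const parsed = JSON.parse(raw);     // real logic, exercised by the test
  return { name: parsed.name, port: parsed.port || 3000 };
}

// The test stubs only the filesystem read and asserts on observable behaviour.
const fakeRead = () => JSON.stringify({ name: 'demo' });
const config = loadConfig(fakeRead, '/etc/demo.json');
assert.deepStrictEqual(config, { name: 'demo', port: 3000 });
console.log('loadConfig test passed');
```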

@mhdawson

Member

mhdawson commented Mar 5, 2019

It would be great if we could include advice that would help the tests be run in tools like CITGM. This might include using a common target and ensuring that the result is easy to identify as passed or failed.

@mhdawson

Member

mhdawson commented Mar 5, 2019

Some info on how to generate/track coverage might also be useful (@bcoe FYI)

@mhdawson

Member

mhdawson commented Mar 5, 2019

Agreed; given the potential overlap with the CI/CD guidelines, we might start with cross-references, but I do think it is important that we have a 'template', whether that be suggestions or templates that can be re-used. Maybe even a script which adds the recommended target to the package.json and any other concrete artifacts that can be generated. The closer we can get to adding what's needed (in the simple case), versus recommending what to do, the better.
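
As a rough sketch of what such a script might look like (the recommended target itself is still to be decided, so the `node test/index.js` command below is only a placeholder):

```js
// add-test-target.js: hypothetical helper that adds a conventional "test"
// script to an existing package.json. The real recommended target is still
// to be decided; "node test/index.js" is only a placeholder here.
const fs = require('fs');
const path = require('path');

const pkgPath = path.join(process.cwd(), 'package.json');
const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));

pkg.scripts = pkg.scripts || {};
if (!pkg.scripts.test || /no test specified/.test(pkg.scripts.test)) {
  // A test command that exits non-zero on failure is easy for tools
  // such as CITGM to read as passed/failed.
  pkg.scripts.test = 'node test/index.js';
  fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2) + '\n');
  console.log('Added "test" script to package.json');
} else {
  console.log('Existing "test" script left untouched');
}
```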
