[Team] Improve project-wide unit testing #1178

Open
addyosmani opened this Issue Sep 4, 2013 · 5 comments

4 participants

@addyosmani
Yeoman member

So, testing and stuff 🍦

We need moar tests

Over the last year, we've focused a ton on improving the stability of yo and our core generators, in addition to growing our feature set. Because we've been strained for time, much of this has relied on heavy manual testing, which leaves a level of uncertainty about just how fragile new features and patches are when used on configurations/OSes beyond our own.

One example of where this hit us hard was the day of the 1.0 release, when 3 emergency patches had to be made because we weren't able to reproduce issues many users were having. This was despite using the latest bleeding-edge versions of our sub-projects :)

In order to be seen as a reliable toolchain, I think we need to work on improving our unit tests. TEST ALL THE THINGS.

How can we improve?

  • Identify which sub-projects require better unit testing
  • Identify what types of unit tests they would benefit from
  • Write unit tests for features ourselves
  • Encourage new contributors to try writing tests as a way to get involved in the project

This is mostly a meta-ticket to track our progress on improving unit tests and putting together a plan for who would like to work on what tests.

@SBoudrias
Yeoman member

To start, we could add tests for the bugs we hit on release day:

yeoman/yo#76 (Maybe here the method should throw an error when it is called with improper parameters - a sketch follows below)
yeoman/yo#75 (This one is harder, as it would require actually running the generator and reporting errors)
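
Something along these lines could work for the first one (#76). This is only a minimal sketch with mocha and Node's assert; how yo builds its environment and which method needs the guard are assumptions to be adjusted against the real code:

```js
// Minimal sketch: assert that calling a method with missing/improper
// arguments throws instead of failing silently. The way the environment is
// created and the choice of `register` are assumptions, not the actual fix.
var assert = require('assert');

describe('environment argument validation', function () {
  it('throws when register is called without arguments', function () {
    var env = require('yeoman-generator')(); // assumption: how yo creates its env
    assert.throws(function () {
      env.register(); // no source path / namespace provided
    });
  });
});
```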

@passy
Yeoman member
@addyosmani
Yeoman member

Since this issue was opened, our test coverage for various parts of the project has improved a little thanks to the efforts of @SBoudrias and a few of our contributors. I would love to encourage more unit test contributions and wonder if a short article about this on the site could assist.

Simon, I'd be happy to write it if you have time to point out which areas you feel need tests the most right now (I'm thinking the generator system + -webapp).

@SBoudrias
Yeoman member

Well well, about the unit tests themselves, I'm quite happy with what we have on the Environment side - near 100% coverage, and the tests are clean and seem pretty complete.

The conflicter really needs better tests and coverage, as it is a core piece of our system.

There's a lot of help output logic dispersed a bit everywhere (some on the environment, some on the generator). This is something someone could work on bringing together into a single "mixin" module. There are OK tests for most help functionality, but they could be put together in one file, and there are probably some holes to cover there too.

On the Base generator side, it is not very good... The tests are tightly coupled and most of them can't run alone. But then again, there are a lot of more "obscure" methods, like invoke, which basically does the same thing as calling run on the environment or using the hooks. I think most of these methods need a good cleanup for Generator system 1.0, so I don't know exactly how much help we can expect on this side... I wouldn't want people spending time on sections we would eventually delete or change a lot. So, on this side I'd stay on stand-by.

So, as for what we can ask help with, I see these two areas: the Conflicter and the help output.
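
For the Conflicter, I'm picturing something like the following. It's a rough sketch only: the require path and the collision(filepath, contents, cb) signature are from memory and may need adjusting to whatever the module really exposes.

```js
// Rough sketch (mocha + assert) of the kind of coverage the Conflicter could
// use. The require path, the singleton-style module, and the status strings
// are assumptions about the current layout - adjust to the real module.
var fs = require('fs');
var path = require('path');
var assert = require('assert');
var conflicter = require('../lib/util/conflicter');

describe('conflicter', function () {
  it('resolves to "create" when the file does not exist yet', function (done) {
    conflicter.collision(path.join(__dirname, 'does-not-exist.js'), 'foo', function (status) {
      assert.equal(status, 'create');
      done();
    });
  });

  it('resolves to "identical" when the on-disk content matches', function (done) {
    conflicter.collision(__filename, fs.readFileSync(__filename, 'utf8'), function (status) {
      assert.equal(status, 'identical');
      done();
    });
  });
});
```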

@smackesey

Hi all, I made a recent contribution to the test helpers here and am currently working on a refactoring of all the generator-ember tests. Right now I'm focusing on testing only generator output, rather than the build process itself. In the course of my work, I've run into problems that apply to generators in general. I think that solutions to these problems should be implemented in the main generator repo.

Broadly, a generator can be thought of as a function that takes options/arguments as input and generates a file/directory tree as output. As the number of possible options grows, the number of tests you need to adequately cover the configuration possibilities grows as well. Different configurations are likely to require similarly structured test suites. You can see a lot of code duplication in the generator-ember tests, as the same assertions are often written twice by hand, once for JS and once with the CoffeeScript option set. In attempting to add support for additional options (emblem, emberscript), I realized this was a problem.
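
To illustrate, here is one way that particular duplication could be collapsed by looping over a list of configurations. It uses the yeoman-generator test helpers; the option name (coffee) and the expected file paths are placeholders of mine, not generator-ember's actual ones.

```js
// Loop over configurations and emit the same assertions for each, instead of
// copy-pasting a JS suite and a CoffeeScript suite. Option name and expected
// file paths are placeholders.
var path = require('path');
var helpers = require('yeoman-generator').test;

var configurations = [
  { label: 'javascript',   coffee: false, appFile: 'app/scripts/app.js' },
  { label: 'coffeescript', coffee: true,  appFile: 'app/scripts/app.coffee' }
];

configurations.forEach(function (config) {
  describe('ember:app (' + config.label + ')', function () {
    beforeEach(function (done) {
      var self = this;
      helpers.testDirectory(path.join(__dirname, 'temp'), function (err) {
        if (err) { return done(err); }
        self.app = helpers.createGenerator('ember:app', ['../../app']);
        self.app.options.coffee = config.coffee;
        done();
      });
    });

    it('creates the expected app file', function (done) {
      this.app.run({}, function () {
        helpers.assertFiles([config.appFile]);
        done();
      });
    });
  });
});
```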

Further, I found that some tests didn't really fit anywhere in the test structure. For example, it was hard to figure out a good place to put a test that checked that the grunt-contrib-compass dependency was correctly listed in the package.json when the compassBootstrap option was used.
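
That particular check is easy enough to express with the existing helpers (sketch below, assuming an app generator set up in a beforeEach as in the previous snippet, and that compassBootstrap is a prompt answered via mockPrompt); the open question is where it belongs in the suite.

```js
// Sketch only: `this.app` is assumed to come from a beforeEach like the one
// above. assertFiles accepts [path, regexp] pairs for content checks.
var helpers = require('yeoman-generator').test;

it('lists grunt-contrib-compass in package.json when compassBootstrap is used', function (done) {
  helpers.mockPrompt(this.app, { compassBootstrap: true });
  this.app.run({}, function () {
    helpers.assertFiles([
      ['package.json', /grunt-contrib-compass/]
    ]);
    done();
  });
});
```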

There are three needs:

  • a system for choosing a subset of the possible configurations to test
  • a system for automatically generating the baseline tests for these configurations
  • a clear test architecture that makes it easy for contributors to write and place tests for new features

I've implemented the beginnings of solutions to each of these problems. The basic strategy I use is to work from a "spec" of generator-ember that lists things like the possible options and values for each sub-generator, dependencies among sub-generators, and dependencies of options. This spec allows you not only to get a quick bird's-eye view of possible generator configurations, but also to dynamically generate a list of expected files for any configuration, as well as a basic test suite. You can see the details at this gist.
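
As a condensed illustration of the idea (the full version is in the gist; the spec shape and the helpers below are mine, not generator-ember's actual code):

```js
// A toy "spec": for each sub-generator, the possible option values and a way
// to derive the expected files for a given configuration. All names here are
// invented for illustration.
var spec = {
  'ember:app': {
    options: {
      scriptLanguage: ['js', 'coffee'],
      compassBootstrap: [true, false]
    },
    files: function (config) {
      var files = ['Gruntfile.js', 'package.json', 'app/index.html'];
      files.push(config.scriptLanguage === 'coffee' ? 'app/scripts/app.coffee' : 'app/scripts/app.js');
      return files;
    }
  }
};

// Enumerate every combination of option values for a sub-generator
// (Cartesian product of the option value lists).
function configurations(subGenerator) {
  var opts = spec[subGenerator].options;
  return Object.keys(opts).reduce(function (combos, key) {
    var expanded = [];
    combos.forEach(function (combo) {
      opts[key].forEach(function (value) {
        var copy = {};
        Object.keys(combo).forEach(function (k) { copy[k] = combo[k]; });
        copy[key] = value;
        expanded.push(copy);
      });
    });
    return expanded;
  }, [{}]);
}

// A baseline suite can then be generated per configuration:
configurations('ember:app').forEach(function (config) {
  describe('ember:app ' + JSON.stringify(config), function () {
    it('creates the expected files', function () {
      // run the generator with `config`, then assert spec['ember:app'].files(config)
    });
  });
});
```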

If the job's available and you think this is a good direction, I'm interested in taking an active role in writing the testing docs and integrating solutions to the problems discussed above and in the gists into the base generator repo. I think it has the potential to eliminate a lot of duplication of effort across generators.

@arthurvr referenced this issue in yeoman/generator Jul 30, 2015
Closed

general question about tests coverage #840
