Support file based modularity #49

Closed
Raynos opened this Issue · 9 comments

4 participants

@Raynos

The workflow is

  • most of the time when I save a test file I want you to only run that test file.
  • sometimes when I save a test file I want to run the entire test suite.

Would be nice to have a clever file watcher that can run a subset of my test suite.

@Raynos

What about the following

If you save a specific test file, run only that test file in a new tab (make it the first tab) and run the entire test suite in a second tab.

If you save a different test file, kill the tab that contained only the previous test file and create a new first tab for this file.

This way every time you save a file you get the results for that file immediately and you have the entire suite running in the background.

When you're iterating very quickly you don't have to wait on the entire suite. When you're working more slowly, the entire suite finishes naturally and you can catch any regressions.
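A minimal sketch of that loop outside of testem (assuming a mocha-style runner and a flat spec/*_spec.js layout, which are assumptions rather than anything specified here):

// Minimal sketch, not testem: watch spec/, run the saved file right away
// for fast feedback, and restart the full suite in the background.
// (Debouncing of repeated watch events is omitted for brevity.)
var fs = require('fs');
var path = require('path');
var spawn = require('child_process').spawn;

var specDir = 'spec';
var suiteRun = null;

function run(args, label) {
  console.log('--- ' + label + ' ---');
  return spawn('mocha', args, { stdio: 'inherit' });
}

fs.watch(specDir, function (event, filename) {
  if (!filename || !/_spec\.js$/.test(filename)) return;

  // "first tab": only the file that was just saved
  run([path.join(specDir, filename)], filename);

  // "second tab": restart the whole suite so regressions still surface
  if (suiteRun) suiteRun.kill();
  suiteRun = run([specDir, '--recursive'], 'full suite');
});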

@Raynos

This workflow works nicely when testing only one launcher, and may work less nicely if you're testing 5 launchers, in which case you will quickly end up with tab clutter.

Maybe have the file-only first tab just for the "main launcher", i.e. you only care about the test results for this file in Chrome / node / phantom and can tolerate waiting for the full regression results from all the other browsers.

@davemo

I believe mocha supports this level of granularity via describe.only inside the test blocks. I think testem is not the right place for something like this to be supported; instead we should submit PRs to testing frameworks that don't have logic in place to limit tests to an exclusive set when running a suite.
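For a concrete picture of what that means, mocha's describe.only / it.only look like this:

describe('parser', function () {
  it('handles empty input', function () {
    // skipped while a .only exists elsewhere in the suite
  });
});

// Only this block runs when the whole suite is executed with .only in place.
describe.only('tokenizer', function () {
  it('splits on whitespace', function () {
    // ... assertions ...
  });
});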

@Raynos

Disagree. Testing frameworks should not have hacks like describe.only.

Decent testing frameworks allow you to run a subset of your suite by running individual files or a subset of your files.

@airportyh
Owner

@Raynos I think you should at least give describe.only (and the like) a try first, you may like it.

However, I am personally running into situations where I find myself switching between testem configurations because I am running ruby tests sometimes and JS tests other times, and although switching is not frequent, it would be nice if it were eliminated. Plus, since rspec doesn't support describe.only (maybe I'll add it, though), I find myself using different configs for running different sets of ruby tests - that's an anti-pattern.

Now, I could go around trying to add describe.only to every test framework in the world and try to convince people to use it, or deal with the fact that some frameworks just won't have this feature.

So, I am thinking maybe the configuration file could support something like watchr's DSL (https://github.com/mynyml/watchr), where I can kick off different launchers depending on the file that was last saved. Would that work?
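Translated to JavaScript terms, watchr-style rules are just pattern-to-action pairs; a rough sketch, assuming rspec and mocha as the underlying runners (my assumption, not anything decided here):

// Sketch of watchr-style rules: each pairs a filename pattern with the
// runner to kick off when a matching file is saved. Illustration only;
// this is not an existing testem option.
var fs = require('fs');
var spawn = require('child_process').spawn;

var rules = [
  { pattern: /_spec\.rb$/, runner: 'rspec' },  // ruby tests
  { pattern: /_spec\.js$/, runner: 'mocha' }   // js tests
];

// recursive watching is platform-dependent; good enough for a sketch
fs.watch('spec', { recursive: true }, function (event, filename) {
  if (!filename) return;
  rules.forEach(function (rule) {
    if (rule.pattern.test(filename)) {
      // hand the saved file to the matching runner, watchr-style
      spawn(rule.runner, ['spec/' + filename], { stdio: 'inherit' });
    }
  });
});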

@Raynos

@airportyh that's interesting. Fire a different launcher based off the file.

I actually want to be able to run multiple launchers, one for everything and one for a specific file.

And of course when I save a code file instead of a test file I want to re-run the launcher for the most recently saved test file.

@snahider

I run different kinds of browser tests (feature, unit, integration), switching between testem files.

Would be nice to have the option to configure different browser launchers in the testem file and assign specific files to each launcher, for example:


launchers:
  FeatureTests:
    protocol: browser
    test_page: spec/feature_runner.html
  UnitTests:
    protocol: browser
    src_files:
      - spec/foo_spec.js

launch_in_dev:
  - FeatureTests
  - UnitTests


@airportyh
Owner

@Raynos seems like you can easily have multiple "watcher rules" that get matched on the same file-save, thus kicking off two different launchers: one for the specific case; another for all tests. I think autotest in ruby did something like what you want: run the specific tests first, and then all the rest.

@snahider yeah, that was on my mind as well - currently there isn't a way to run different sets of tests in a different tab.

I am going to sketch out some ideas of what that configuration file could look like - hopefully something not nightmarish.
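One purely hypothetical shape for that config, expressed as a JS object just to make the "two rules can match one save" idea concrete (none of these keys exist in testem):

// Hypothetical sketch: watch_rules, on_save, and launch are invented names
// for illustration, not real testem options.
module.exports = {
  launchers: {
    SingleFile: { /* runs just the saved file, shown in the first tab */ },
    FullSuite:  { /* runs everything, shown in a second tab */ }
  },
  watch_rules: [
    // both rules match a spec save: fast feedback first, regressions later
    { on_save: 'spec/*_spec.js', launch: ['SingleFile', 'FullSuite'] },
    // saving source code only re-runs the full suite
    { on_save: 'lib/*.js', launch: ['FullSuite'] }
  ]
};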

@Raynos

I implemented file based modularity myself with testem-node file={{file}}

@Raynos Raynos closed this