
Basically, document and clean up the new version a bit. Both the code and the workflow.
1 parent ac243de commit 53384078bc1fd0fbda15b21a4003c873729c0a90 Benjamin Thomas committed Aug 9, 2010
90 API.markdown
@@ -0,0 +1,90 @@
+The events that are called by `testing.runSuites` and/or `testing.runSuite` make
+it possible to write your own test runners and format the output however you'd
+like. See `runners.js` for an example of how all these functions work; a minimal
+sketch also follows the event list below.
+
+Events
+------
+`onStart`: called when `runSuites` starts running suites. This gets 1 argument:
+the number of suites being run.
+
+`onDone`: called when runSuites finishes running the suites. This gets 2
+arguments: an array of suite results (see below), and the duration in seconds
+that it took to run all the suites.
+
+`onSuiteStart`: called when a suite is started. This gets 1 optional argument:
+the name of the suite. A suite might not have a name if a suite object is passed
+to `runSuites` as opposed to a file name.
+
+`onSuiteDone`: called when a suite finishes. This gets 1 argument: the suite
+result object for the specific suite. See below.
+
+`onTestStart`: called when a test is started. This gets 1 argument: the name of
+the test.
+
+Careful! The test runner will think errors thrown in this function belong to
+the test suite, and you'll get inaccurate results. In short, make sure you
+don't throw any errors in this listener.
+
+`onTestDone`: Called when a test finishes. This gets 1 argument, the test
+result object for the specific test. See below.
+
+Careful! The test runner will think errors thrown in this function belong to
+the test suite, and you'll get inaccurate results. In short, make sure you
+don't throw any errors in this listener.
+
+`onPrematureExit`: called when the process exits and there are still tests that
+haven't finished. This occurs when people forget to finish their tests or their
+tests don't work as expected. This gets 1 argument: an array of the
+names of the tests that haven't finished.
+
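For illustration, here is a minimal sketch of wiring these events up
(`async_testing` is assumed to be on your Node path, and the file names are
placeholders):

    var sys = require('sys')
      , testing = require('async_testing')
      ;

    // './test-one.js' and './test-two.js' are placeholder file names
    testing.runSuites(['./test-one.js', './test-two.js'],
      { onStart: function(numSuites) { sys.puts('Running ' + numSuites + ' suite(s)'); }
      , onSuiteStart: function(name) { sys.puts('Suite: ' + (name || '(unnamed)')); }
      , onTestDone: function(result) { sys.puts('  ' + result.status + ': ' + result.name); }
      , onDone: function(suiteResults, duration) { sys.puts('Done in ' + duration + ' seconds'); }
      });
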
+Suite Result
+------------
+A suite result is an object that looks like this:
+
+ { name: suite name (if applicable)
+ , results: an array of test results for each test ran (see below)
+ , duration: how long the suite took
+ , numErrors: number of errors
+ , numFailures: number of failures
+ , numSuccesses: number of successes
+ }
+
+Note: even if a suite has many tests, the array of test results might not
+include them all if a specific test was requested, so a Suite Result could hold
+the results of 0 tests.
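For example, an `onSuiteDone` handler could summarize a suite result like this
(a sketch; the division by 1000 assumes `duration` is in milliseconds, which is
how the default runner prints it):

    var sys = require('sys');

    function onSuiteDone(suiteResult) {
      // suiteResult.duration is assumed to be milliseconds, as in the default runner
      sys.puts((suiteResult.name || '(unnamed suite)') + ': '
        + suiteResult.numSuccesses + ' passed, '
        + suiteResult.numFailures + ' failed, '
        + suiteResult.numErrors + ' errored in '
        + (suiteResult.duration / 1000) + ' seconds');
    }
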
+
+Test Result
+-----------
+A test result is an object that looks like one of the following:
+
+success: the test completed successfully
+
+ { duration: how long the test took
+ , name: test name
+ , status: 'success'
+ , numAssertions: number of assertions
+ }
+
+failure: the test had an assertion error
+
+ { duration: how long the test took
+ , name: test name
+ , status: 'failure'
+ , failure: the assertion error
+ }
+
+error: the test had an uncaught error
+
+ { duration: how long the test took
+ , name: test name
+ , status: 'error'
+ , error: the error
+ }
+
+multiError: this result occurs when tests are run in parallel and it isn't
+possible to accurately figure out which errors went with which tests.
+
+ { name: [testName1, testName2, testName3]
+ , status: 'multiError'
+ , errors: [err1, err2, err3]
+ }
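
Putting those shapes together, an `onTestDone` handler might branch on `status`
like this (a sketch that only uses the fields listed above):

    var sys = require('sys');

    function onTestDone(result) {
      switch (result.status) {
        case 'success':
          sys.puts('ok: ' + result.name + ' (' + result.numAssertions + ' assertions)');
          break;
        case 'failure':
          sys.puts('fail: ' + result.name + ' -- ' + result.failure.message);
          break;
        case 'error':
          sys.puts('error: ' + result.name);
          break;
        case 'multiError':
          // name and errors are arrays here
          sys.puts('unattributed errors in: ' + result.name.join(', '));
          break;
      }
    }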
305 README.markdown
@@ -1,206 +1,199 @@
node-async-testing
==================
-A simple test runner with testing asynchronous code in mind.
-
-Some goals of the project:
-
-+ Simple and intuitive. You create a test and then run the code you want to
- test and make assertions as you go along. Tests should be functions.
-+ Use the assertion module that comes with Node. If you are
- familiar with it you won't have any problems. You shouldn't have to learn
- new assertion functions.
-+ Test files to be executable by Node. No preprocessors. If your test file is
- called "my_test_file.js" then "node my_test_file.js" should run the tests.
-+ Address the issue of testing asynchronouse code. Node is asynchronous, so
- testing should be too.
+A simple test runner for testing asynchronous code.
+
+Goals of the project:
+
++ Tests should just be functions. Simple and intuitive.
++ You shouldn't have to learn new assertion functions. Use the assertion module
+ that comes with Node. If you are familiar with it you won't have any problems.
++ Test files should be executable by Node. No preprocessors. If your test file
+ is called "my_test_file.js" then "node my_test_file.js" should run the tests.
++ Node is asynchronous, so testing should be too.
+ Not another Behavior Driven Development testing framework. I don't
like specifications and what not. They only add verbosity.
-
- test('X does Y',function() {
- //test goes here
- });
-
- is good enough for me.
-+ Make no assumptions about the code being tested.
++ Make no assumptions about the code being tested. You should be able to test
+ any code, and all aspects of it.
Feedback/suggestions encouraged!
+Installing
+----------
+
+To install using npm do this:
+
+ npm install async_testing
+
+To install by hand, the file `async_testing.js` needs to be in your Node path. The
+easiest place to put it is in `~/.node_libraries`. To install the command line
+script, you need to put the file `node-async-test` in your `$PATH` somewhere.
+
Writing Tests
-------------
The hard part of writing a test suite for asynchronous code is that when a test
fails, you don't know which test it was that failed. Errors won't get caught by
`try`/`catch` statements.
-This module aims to address that by making sure
-
-1. First, it gives each test its own unique assert object. That way you know
- which assertions correspond to which tests.
-2. Second, the tests get ran one at a time. That way, it is possible to add a
- global exceptionHandler for the process and catch the tests whenever
- they happen.
-
- To only run one test at a time, asynchronous tests receive a `finished()`
- function as an argument. They must call this function when they are done.
- The next test won't be run until this function is called.
-
-Tests are added to a TestSuite object.
-
- var TestSuite = require('async_testing').TestSuite;
-
- var suite = new TestSuite();
- suite.addTests({
- "simple asynchronous": function(assert, finished) {
- setTimeout(function() {
- assert.ok(true);
- finished();
- });
- }
+This module addresses that by
+
+1. giving each test its own unique assert object. This way you know
+ which assertions correspond to which tests.
+2. running (by default) the tests one at a time. This way it is possible to
+ add a global exceptionHandler for the process and catch the errors whenever
+ they happen.
+3. requiring you to tell the test runner when the test is finished. This way
+ you don't have any doubt as to whether or not an asynchronous test still
+ has code to run. (though you still have to be very careful when you finish
+ a test!)
+
+**node-async-testing** tests are just functions:
+
+ function asynchronousTest(test) {
+ setTimeout(function() {
+ // make an assertion (these are just commonjs assertions)
+ test.ok(true);
+ // finish the test
+ test.finished();
});
+ }
-If your test isn't asynchronous, you don't have to use the finished callback.
-If you don't list the finished callback in the parameters of the test,
-node-async-testing will assume the test is synchronous.
+**node-async-testing** makes no assumptions about tests, so even if your test is
+not asynchronous you still have to finish it:
- var suite = new TestSuite();
- suite.addTests({
- "simple synchronous": function(assert) {
- assert.ok(true);
- }
- });
+ function synchronousTest(test) {
+ test.ok(true);
+ test.finished();
+ };
-You can add a setup function that is ran once at the beginning of each test.
-You can do a teardown function, as well:
-
- var suite = new TestSuite();
- suite.setup(function() {
- this.foo = 'bar';
- });
- suite.teardown(function() {
- this.foo = null;
- });
- suite.addTests({
- "synchronous foo equals bar": function(assert) {
- assert.equal('bar', this.foo);
- }
- });
-
-If you need to access the variables created in the setup function asynchronously
-your tests receive a third argument which has this information:
-
- var suite = new TestSuite();
- suite.setup(function() {
- this.foo = 'bar';
- });
- suite.teardown(function() {
- this.foo = null;
- });
- suite.addTests({
- "asynchronous foo equals bar": function(assert, finished, test) {
- process.nextTick(function() {
- assert.equal('bar', test.foo);
- finished();
+**node-async-testing** is written for running suites of tests, not individual
+tests. A test suite is just an object with tests:
+
+ var suite = {
+ asynchronousTest: function(test) {
+ setTimeout(function() {
+ // make an assertion (these are just commonjs assertions)
+ test.ok(true);
+ // finish the test
+ test.finished();
});
+ },
+ synchronousTest: function(test) {
+ test.ok(true);
+ test.finished();
}
- });
+ }
If you want to be explicit about the number of assertions run in a given test,
-you can set `numAssertionsExpected` on the test. This can be helpful in
+you can set `numAssertions` on the test. This can be very helpful in
asynchronous tests where you want to be sure all callbacks get fired.
- var suite = new TestSuite();
- suite.addTests({
- "assertions expected (fails)": function(assert) {
- this.numAssertionsExpected = 3;
+ suite['test assertions expected (fails)'] = function(test) {
+ test.numAssertions = 3;
- assert.ok(true);
- // this test will fail!
- }
- });
+ test.ok(true);
+ test.finished();
+ // this test will fail!
+ }
-If you need to make assertions about what kind of errors are thrown, you can listen
-for the uncaughtException event on the test:
+If you need to make assertions about what kind of errors are thrown, you can
+add an `uncaughtExceptionHandler`:
- var suite = new TestSuite();
- suite.addTests({
- "uncaughtException listener": function(assert, finished, test) {
- test.numAssertionsExpected = 1;
- test.addListener('uncaughtException', function(err) {
- assert.equal('hello', err.message);
- finished();
- });
+ suite['test catch sync error'] = function(test) {
+ var e = new Error();
- throw new Error('hello');
- }
- });
+ test.uncaughtExceptionHandler = function(err) {
+ test.equal(e, err);
+ test.finished();
+ }
-All the functions to a TestSuite can be chained to cut down on verbosity:
-
- (new TestSuite())
- .setup(function() {
- this.foo = 'bar';
- })
- .teardown(function() {
- this.foo = null;
- })
- .addTests({
- "foo equal bar": function(assert) {
- assert.equal('bar', foo);
- }
- });
+ throw e;
+ };
+
+All of the examples in this README can be seen in `examples/readme.js`, which
+can be run with the following command:
+
+ node examples/readme.js
+
+Additionally, you can look at the files in the `test` directory for more
+examples.
Running test suites
-------------------
-To run a test suite, you call `runTests()` on it.
+The easiest way to run a suite is with the `run` method:
- var suite = new TestSuite();
- suite.addTests({
- "simple": function(assert) {
- assert.ok(true);
- }
- });
- suite.runTests();
+ require('async_testing').run(suite);
-There is also a test runner which can run many TestSuites at once:
+The `run` command can take a test suite, a file name, a directory name, or an
+array containing any mix of those three.
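For example, all of the following calls should be accepted (a sketch; `suite` is
the object from above, and the paths are illustrative):

    var run = require('async_testing').run;

    run(suite);                             // a suite object
    run(__filename);                        // a file name
    run('./test');                          // a directory of test files
    run([suite, './test/test-errors.js']);  // an array mixing the three
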
- var suites = {
- 'first suite': new TestSuite(),
- 'second suite': new TestSuite()
- };
+The recommended way to write and run test suites is by making the `exports`
+object of your module the test suite object. This way your suites can be run by
+other scripts that can do interesting things with the results. However, you
+still want to be able to run that suite via the `node` command. Here's how to
+accomplish all that:
+
+ exports['first test'] = function(test) { ... };
+ exports['second test'] = function(test) { ... };
+ exports['third test'] = function(test) { ... };
+
+ if (module === require.main) {
+ require('async_testing').run(__filename);
+ }
+
+Now, that suite can be run by calling the following from the command line (if it
+were in a file called `mySuite.js`):
- require('async_testing').runSuites(suites);
+ node mySuite.js
-It is recommended that you export your test suites, so other more capable
-scripts can handle running them. However, it is still convenient to be able to
-run a specific file. Here's how you can allow both:
+Additionally, the `run` command can be passed an array of command line arguments
+that alter how it works:
- exports['first suite'] = new TestSuite();
- exports['second suite'] = new TestSuite();
+ exports['first test'] = function(test) { ... };
+ exports['second test'] = function(test) { ... };
+ exports['third test'] = function(test) { ... };
if (module === require.main) {
- require('../async_testing').runSuites(exports);
+ require('async_testing').run(__filename, process.ARGV);
}
-This way the tests will only be run automatically if the file containing them is
-the script being ran.
+Now, you can tell the runner to run the tests in parallel:
-node-async-testing also comes with a script that will run all test files in a
-specified directory. A test file is one that matches this regular expression:
-`/^test-.*\.js$/`. To use the script make sure node-async-testing has been
-installed properly and then run:
+ node mySuite.js --parallel
- node-async-test testsDir
+Or only run a specific test:
-Installing
-----------
+ node mySuite.js --test-name "first test"
+
+Use the `help` flag to see all the options:
+
+ node mySuite.js --help
+
+**node-async-testing** also comes with a command line script that will run all
+test files in a specified directory (it recursively searches). A test file is
+one whose name begins with `test-`. To use the script, make sure
+**node-async-testing** has been installed properly and then run:
+
+ node-async-test tests-directory
+
+Or you can give it a specific test to run:
+
+ node-async-test tests-directory/mySuite.js
+
+The advantage of using the `node-async-test` command is that its exit status
+is the number of failed tests (so zero means success). This way you can write
+shell scripts that do different things depending on whether or not the suite
+was successful.
+
+Custom Reporting
+----------------
-To install, the file `async_testing.js` needs to be in your Node path. The
-easiest place to put it is in `~/.node_libraries`.
+It is possible to write your own test runners. See `node-async-test` or
+`runners.js` for examples or `API.markdown` for a description of the different
+events and what arguments they receive.
-Notes
------
+This feature is directly inspired by Caolan McMahon's [nodeunit], which is
+awesome.
-+ If you don't care about being able to count the number of assertions in a given
- test, you can use any assertion library you'd like.
+[nodeunit]: http://github.com/caolan/nodeunit
57 api.markdown
@@ -1,57 +0,0 @@
-The events that are called by `testing.runSuites` and/or `testing.runSuite`.
-These make it possible to write your own test runners and format the output
-however you like. See `runners.js` for example of how all these functions
-work.
-
-Events
-------
-`onStart`: `function(numSuites)`
-
-`onDone`: `function(suiteResultsArray, duration)`
-
-`onSuiteStart`: `function([suiteName])` -- carefull! suite name might not exist
-
-`onSuiteDone`: `function(suiteResult)`
-
-`onTestStart`: `function(testName)` -- carefull! errors caused here will show up in the suite errors
-
-`onTestDone`: `function(testResult)` -- carefull! errors caused here will show up in the suite errors
-
-`onPrematureExit`: `function(testNamesArray)`
-
-Objects
--------
-testResult: (one of the following)
-
-multiError:
- { name: [testName1, testName2, testName3]
- , status: 'multiError'
- , errors: [err1, err2, err3]
- }
-failure:
- { duration: how long the test took
- , name: test name
- , status: 'failure'
- , failure: the assertion error
- }
-error:
- { duration: how long the test took
- , name: test name
- , status: 'error'
- , error: the error
- }
-success:
- { duration: how long the test took
- , name: test name
- , status: 'success'
- , numAssertions: number of assertions
- }
-
-suiteResult:
- { name: suite name (if applicable)
- , results: [testResult1, testResult2, ...]
- , duration: how long the suite took
- , numErrors: number of errors
- , numFailures: number of failures
- , numSuccesses: number of successes
- }
4 node-async-test → bin/node-async-test.js
@@ -1,10 +1,12 @@
#! /usr/bin/env node
try {
+ // always check for a local copy of async_testing first
var testing = require('./async_testing');
}
catch(err) {
if( err.message == "Cannot find module './async_testing'" ) {
+ // look in the path for async_testing
var testing = require('async_testing');
}
else {
@@ -15,6 +17,8 @@ catch(err) {
testing.run(null, process.ARGV, done);
function done(allResults) {
+ // we want to have our exit status be the number of problems
+
var problems = 0;
for(var i = 0; i < allResults.length; i++) {
38 examples/readme.js
@@ -0,0 +1,38 @@
+// This file contains all the examples mentioned in the readme
+
+exports['asynchronousTest'] = function(test) {
+ setTimeout(function() {
+ // make an assertion (these are just commonjs assertions)
+ test.ok(true);
+ // finish the test
+ test.finished();
+ },50);
+};
+
+exports['synchronousTest'] = function(test) {
+ test.ok(true);
+ test.finished();
+};
+
+exports['test assertions expected (fails)'] = function(test) {
+ test.numAssertions = 3;
+
+ test.ok(true);
+ test.finished();
+ // this test will fail!
+}
+
+exports['test catch sync error'] = function(test) {
+ var e = new Error();
+
+ test.uncaughtExceptionHandler = function(err) {
+ test.equal(e, err);
+ test.finished();
+ }
+
+ throw e;
+};
+
+if (module == require.main) {
+ require('../async_testing').run(__filename, process.ARGV);
+}
55 examples/test-readme.js
@@ -1,55 +0,0 @@
-var TestSuite = require('../async_testing').TestSuite;
-
-exports['README examples suite'] = (new TestSuite())
- .setup(function(callback) {
- this.foo = 'bar';
- process.nextTick(function() {
- callback();
- });
- })
- .teardown(function(callback) {
- this.foo = null;
-
- process.nextTick(function() {
- callback();
- });
- })
- .addTests({
- "simple asynchronous": function(assert, finished) {
- setTimeout(function() {
- assert.ok(true);
- finished();
- },50);
- },
- "simple synchronous": function(assert) {
- assert.ok(true);
- },
- "synchronous foo equal bar": function(assert) {
- assert.equal('bar', this.foo);
- },
- "asynchronous foo equal bar": function(assert, finished, test) {
- process.nextTick(function() {
- assert.equal('bar', test.foo);
- finished();
- });
- },
- "assertions expected (fails)": function(assert) {
- this.numAssertionsExpected = 3;
-
- assert.ok(true);
- // this test will fail!
- },
- "uncaughtException listener": function(assert, finished, test) {
- test.numAssertionsExpected = 1;
- test.addListener('uncaughtException', function(err) {
- assert.equal('hello', err.message);
- finished();
- });
-
- throw new Error('hello');
- }
- });
-
-if (module === require.main) {
- require('../async_testing').runSuites(exports);
-}
91 examples/test-suites.js
@@ -1,91 +0,0 @@
-var sys = require('sys');
-var TestSuite = require('../async_testing').TestSuite;
-
-exports['First Suite'] = new TestSuite()
- .addTests({
- "this does something": function(assert) {
- assert.ok(true);
- },
- "this doesn't fail": function(assert, finished) {
- assert.ok(true);
- setTimeout(function() {
- assert.ok(true);
- finished();
- }, 300);
- },
- "this does something else": function(assert) {
- assert.ok(true);
- assert.ok(true);
- },
- });
-
-exports['Second Suite'] = new TestSuite()
- .addTests({
- "this does something": function(assert) {
- assert.ok(true);
- },
- "this fails": function(assert, finished) {
- setTimeout(function() {
- assert.ok(false);
- finished();
- }, 300);
- },
- "this does something else": function(assert) {
- assert.ok(true);
- assert.ok(true);
- },
- "this errors": function() {
- throw new Error();
- },
- "this errors asynchronously": function(assert, finished) {
- process.nextTick(function() {
- throw new Error();
- finished();
- });
- },
- "more": function(assert) {
- assert.ok(true);
- },
- "throws": function(assert) {
- assert.throws(function() {
- throw new Error();
- });
- },
- "expected assertions": function(assert) {
- this.numAssertionsExpected = 1;
- assert.throws(function() {
- throw new Error();
- });
- },
- });
-
-exports['Setup Suite'] = (new TestSuite())
- .setup(function() {
- this.foo = 'bar';
- })
- .addTests({
- "foo equals bar": function(assert, finished, test) {
- assert.equal('bar', this.foo);
- assert.equal('bar', test.foo);
- finished();
- }
- });
-
-var count = 0;
-exports['Wait Suite'] = new TestSuite();
-exports['Wait Suite'].addTests({
- "count equal 0": function(assert, finished) {
- assert.equal(0, count);
- setTimeout(function() {
- count++;
- finished();
- }, 300);
- },
- "count equal 1": function(assert) {
- assert.equal(1, count);
- }
- });
-
-if (module === require.main) {
- require('../async_testing').runSuites(exports);
-}
2 index.js
@@ -0,0 +1,2 @@
+// convenience file for easily including async testing
+module.exports = require('./lib/async_testing');
134 async_testing.js → lib/async_testing.js
@@ -3,20 +3,22 @@ var assert = require('assert')
, fs = require('fs')
;
-var runners = require('./runners');
-exports.run = runners.def;
+// include the default test runner as the "run" function on this module
+exports.run = require('./runners').def;
+// adds the assertion functions to a test, but binds them to that particular
+// test so assertions are properly associated with the right test.
function addAssertionFunctions(test) {
- var assertionFunctions = [
- 'ok',
- 'equal',
- 'notEqual',
- 'deepEqual',
- 'notDeepEqual',
- 'strictEqual',
- 'notStrictEqual',
- 'throws',
- 'doesNotThrow'
+ var assertionFunctions =
+ [ 'ok'
+ , 'equal'
+ , 'notEqual'
+ , 'deepEqual'
+ , 'notDeepEqual'
+ , 'strictEqual'
+ , 'notStrictEqual'
+ , 'throws'
+ , 'doesNotThrow'
];
assertionFunctions.forEach(function(funcName) {
@@ -28,26 +30,38 @@ function addAssertionFunctions(test) {
catch(err) {
if (err instanceof assert.AssertionError) {
err.TEST = test;
+ //TODO: should this be moved outside of the if check
throw err;
}
}
}
});
}
+/* Runs a module of tests. Each property in the module object should be a
+ * test. A test is just a method.
+ *
+ * Available configuration options:
+ *
+ * + parallel: boolean, for whether or not the tests should be run in parallel
+ * or serially. Obviously, parallel is faster, but it doesn't give
+ * as accurate error reporting
+ * + testName: string, the name of a test to be run
+ * + name: string, the name of the module/suite being ran
+ *
+ * Plus, there are options for the following events. These should be functions.
+ *
+ * + onSuiteStart
+ * + onSuiteDone
+ * + onTestStart
+ * + onTestDone
+ * + onPrematureExit
+ */
+//TODO rename 'obj' to 'mod'
exports.runSuite = function(obj, options) {
- // make sure options exist
+ // make sure options exists
options = options || {};
- /* available options:
- *
- * + parallel: true or false, for whether or not the tests should be run
- * in parallel or serially. Obviously, parallel is faster, but it doesn't
- * give as accurate error reporting
- * + testName: string, the name of a test to be ran
- * + name: string, the name of the module/suite being ran
- */
-
// keep track of internal state
var suite =
{ todo: []
@@ -79,16 +93,18 @@ exports.runSuite = function(obj, options) {
// add our global error listener
process.addListener('uncaughtException', errorHandler);
+ // add our exit listener to be able to notify about unfinished tests
process.addListener('exit', exitHandler);
suite.startTime = new Date();
+
// start the test chain
startNextTest();
/******** functions ********/
function startNextTest() {
- // pull off the next test
+ // grab the next test
var curTest = suite.todo.shift();
// break out of this loop if we don't have any more tests to run
@@ -106,10 +122,13 @@ exports.runSuite = function(obj, options) {
// add this function to the test object because the assert wrapper needs
// to be able to finish a test if an assertion failes
curTest.finish = testFinished;
+
// for keeping track of an uncaughtExceptionHandler
+ //TODO can we delete this line?
curTest.UEHandler = null;
+
// this is the object that the tests get for manipulating how the tests work
- curTest.obj =
+ var testObj =
// we use getters and setters for the uncaughtExceptionHandler because it
// looks nicer but we need to be able to throw an error if they are running in
// parallel
@@ -120,18 +139,24 @@ exports.runSuite = function(obj, options) {
}
curTest.UEHandler = h;
}
+ // TODO rename this to finish
, finished: function() { curTest.finish(); }
};
+ // store the testObj in the test
+ curTest.obj = testObj;
+
// notify listeners
if (options.onTestStart) {
options.onTestStart(curTest.name);
}
+ //TODO can we move this above notifying the listeners?
addAssertionFunctions(curTest);
try {
// actually call the test
+ //TODO should we call this on the test object? I think not!
curTest.func.call(curTest.obj, curTest.obj);
}
catch(err) {
@@ -155,9 +180,9 @@ exports.runSuite = function(obj, options) {
if (failure) {
this.failure = failure;
}
- // otherwise, if they specified the number of assertions, let's make sure
- // they match up
+ // otherwise
else {
+ // if they specified the number of assertions, make sure they match up
if (this.obj.numAssertions && this.obj.numAssertions != this.numAssertions) {
this.failure = new assert.AssertionError(
{ message: 'Wrong number of assertions: ' + this.obj.numAssertions +
@@ -185,14 +210,15 @@ exports.runSuite = function(obj, options) {
}
// mark that this test has completed
- var formattedTest = formatTestResult(this);
- suite.results.push(formattedTest);
+ var report = createTestReport(this);
+ suite.results.push(report);
// check to see if we can isolate any errors
checkErrors();
if (options.onTestDone) {
- options.onTestDone(formattedTest);
+ // notify listener
+ options.onTestDone(report);
}
// check to see if we are all done
@@ -206,8 +232,7 @@ exports.runSuite = function(obj, options) {
}
}
- // listens for uncaught errors and keeps track of which tests they could
- // be from
+ // listens for uncaught errors and keeps track of which tests they could be from
function errorHandler(err) {
// assertions throw an error, but we can't just catch those errors, because
// then the rest of the test will run. So, we don't catch it and it ends up
@@ -221,6 +246,7 @@ exports.runSuite = function(obj, options) {
// We want to allow tests to supply a function for handling uncaught errors,
// and since all uncaught errors come here, this is where we have to handle
// them.
+ // (you can only handle uncaught errors when not in parallel mode)
if (!options.parallel && suite.started[0].UEHandler) {
try {
// run the UncaughtExceptionHandler
@@ -234,6 +260,7 @@ exports.runSuite = function(obj, options) {
catch(e) {
// if the UncaughtExceptionHandler raises an Error we have to make sure
// it is handled.
+ //TODO can we just call errorHandle here?
// The error raised could be an AssertionError, in that case we don't
// want to raise an error for that (see the above comment)...
@@ -279,6 +306,7 @@ exports.runSuite = function(obj, options) {
// any time a test finishes, we could learn more about errors that had
// multiple candidates, so loop through and see if anything has changed
for(var i = 0; i < suite.results.length; i++) {
+ // if there is only one candidate then we can finish that test and report it
if (suite.results[i].candidates && suite.results[i].candidates.length == 1) {
// get the test
var test = suite.results[i].candidates[0];
@@ -297,17 +325,19 @@ exports.runSuite = function(obj, options) {
delete test.startTime;
delete test.error.endTime;
- // store the formatted result
- suite.results[i] = formatTestResult(test);
+ // store the report for the test
+ suite.results[i] = createTestReport(test);
if (options.onTestDone) {
+ // notify listener
options.onTestDone(suite.results[i]);
}
}
}
}
- function formatTestResult(result) {
+ // makes a nice tidy object with the results of a test, a 'result' so to speak
+ function createTestReport(result) {
if (result.constructor == Array) {
return {
name: result[0].map(function(t) { return t.name; })
@@ -401,7 +431,14 @@ exports.runSuite = function(obj, options) {
}
}
- // this isn't as efficient as it could be. Basically, we
+ /* This isn't as efficient as it could be. Basically, this is where we
+ * analyze all the tests that finished in error and have multiple test
+ * candidates. Then we generate reports for those errors.
+ * Right now it is really dumb, it just lumps all of these tests into one
+ * multierror report. However it could be smarter than that.
+ *
+ * TODO make this smarter!
+ */
function groupMultiErrors(results) {
var multiErrors = [];
for(var i = 0; i < results.length; i++) {
@@ -424,16 +461,37 @@ exports.runSuite = function(obj, options) {
}
}
- var formatted = formatTestResult(r);
+ var report = createTestReport(r);
if (options.onTestDone) {
- options.onTestDone(formatted);
+ options.onTestDone(report);
}
- results.push(formatted);
+ results.push(report);
}
}
}
+/* runSuites runs an array, where each element in the array can be a filename or
+ * a commonjs module.
+ *
+ * Available configuration options:
+ *
+ * + parallel: boolean, for whether or not the tests should be run in parallel
+ * or serially. Obviously, parallel is faster, but it doesn't give
+ * as accurate error reporting
+ * + testName: string, the name of a test to be run
+ * + suiteName: string, the name of a suite to be run
+ *
+ * Plus, there are options for the following events. These should be functions.
+ *
+ * + onStart
+ * + onDone
+ * + onSuiteStart
+ * + onSuiteDone
+ * + onTestStart
+ * + onTestDone
+ * + onPrematureExit
+ */
exports.runSuites = function(list, options) {
// make sure options exist
options = options || {};
48 runners.js → lib/runners.js
@@ -4,13 +4,20 @@ var sys = require('sys')
var testing = require('./async_testing');
-var red = function(str){return "\033[31m" + str + "\033[39m"}
- , yellow = function(str){return "\033[33m" + str + "\033[39m"}
- , green = function(str){return "\033[32m" + str + "\033[39m"}
- , bold = function(str){return "\033[1m" + str + "\033[22m"}
- ;
-
+/* The default test runner
+ *
+ * list: an array of filenames or commonjs modules to be run, or a single commonjs module
+ * options: options for running the suites
+ * args: command line arguments to override/augment the options
+ * callback: a function to be called when the suites are finished
+ */
exports.def = function(list, options, args, callback) {
+ var red = function(str){return "\033[31m" + str + "\033[39m"}
+ , yellow = function(str){return "\033[33m" + str + "\033[39m"}
+ , green = function(str){return "\033[32m" + str + "\033[39m"}
+ , bold = function(str){return "\033[1m" + str + "\033[22m"}
+ ;
+
// make sure options exist
if (typeof options == 'undefined' || options == null) {
options = {};
@@ -27,9 +34,14 @@ exports.def = function(list, options, args, callback) {
}
}
+ // list needs to be an array
if (!list) {
list = [];
}
+ else if (list.constructor != Array) {
+ // if it isn't an array, a module was passed in directly to be run
+ list = [list];
+ }
for(var i = 2; i < args.length; i++) {
switch(args[i]) {
@@ -42,8 +54,8 @@ exports.def = function(list, options, args, callback) {
case "-p":
options.parallel = true;
break;
- case "--serial":
- case "-s":
+ case "--consecutive":
+ case "-c":
options.parallel = false;
break;
case "--test-name":
@@ -65,17 +77,27 @@ exports.def = function(list, options, args, callback) {
}
}
+ // if we have no items in this list, use the current dir
if( list.length < 1 ) {
list = ['.'];
}
+ // clean up list
+ for(var i = 0; i < list.length; i++) {
+ // if it is a filename and the filename starts with the current directory
+ // then remove that so the results are more succinct
+ if (typeof list[i] === 'string' && list[i].indexOf(process.cwd()) === 0 && list[i].length > (process.cwd().length+1)) {
+ list[i] = list[i].replace(process.cwd()+'/', '');
+ }
+ }
+
if (options.help) {
sys.puts('Flags:');
- sys.puts(' --log, -l: the log level: 0, 1, 2');
- sys.puts(' --parallel, -p: run the suites in parallel mode');
- sys.puts(' --serial, -s: run the suites in serial mode');
+ sys.puts(' --log, -l: the log level: 0 => succinct, 1 => default, 2 => full stack traces');
sys.puts(' --test-name, -t: to search for a specific test');
- sys.puts(' --suite-name, -n: to search for a specific suite ');
+ sys.puts(' --suite-name, -s: to search for a specific suite ');
+ sys.puts(' --parallel, -p: run the suites in parallel mode');
+ sys.puts(' --consecutive, -c: don\'t run the suites in parallel mode');
sys.puts(' --help, -h: this help message');
process.exit();
}
@@ -199,7 +221,7 @@ exports.def = function(list, options, args, callback) {
}
}
else {
- sys.print(' '+green('OK: ')+total+' tests.');
+ sys.print(' '+green('OK: ')+total+' test'+(total == 1 ? '' : 's')+'.');
}
sys.puts(' '+(suiteResults.duration/1000)+' seconds.');
2 test/test-all_passing.js
@@ -17,5 +17,5 @@ exports['test D'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
2 test/test-async_assertions.js
@@ -41,5 +41,5 @@ exports['test fail - too many -- numAssertionsExpected'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
2 test/test-errors.js
@@ -9,5 +9,5 @@ exports['test async error'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
2 test/test-multiple_errors.js
@@ -14,5 +14,5 @@ exports['test async error 2'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
2 test/test-sync_assertions.js
@@ -29,5 +29,5 @@ exports['test fail - too many -- numAssertionsExpected'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
2 test/test-uncaught_exception_handlers.js
@@ -114,5 +114,5 @@ exports['test async error error again async'] = function(test) {
};
if (module == require.main) {
- require('../async_testing').run(exports, process.ARGV);
+ require('../lib/async_testing').run(__filename, process.ARGV);
}
