NOTE: Although TestSwarm is in use in some very large environments already, it is still in a relatively early stage. Changes in the software may or may not maintain backwards compatibility. Please keep this in mind when using the TestSwarm software.
The ultimate result of TestSwarm is its project pages and job pages.
A project page shows source control commits (going vertically) by browser (going horizontally). 'Green' indicates the runs are 100% passing, 'Red' indicates a failure, and 'Grey' means the runs are scheduled and awaiting a client.
For more details on the individual jobs in a project, click the job title in the first column.
This shows all individual runs of the job (going vertically) by browser. To view the results of a completed run, click the "run results" icon inside the colored cell.
From top to bottom, the structure is as follows:
The architecture is as follows:
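The core of that architecture can be sketched conceptually: a central server holds a queue of pending runs, and each connected browser client repeatedly polls for a run, executes it, and reports the result back. The sketch below is illustrative only; the real TestSwarm server is PHP/MySQL, the real client is JavaScript running in the browser, and the method names here are assumptions, not TestSwarm's actual API.

```python
import queue


class SwarmServer:
    """Toy stand-in for the central server: a queue of pending runs
    plus a store of reported results."""

    def __init__(self):
        self.pending = queue.Queue()  # runs waiting for a client
        self.results = []             # (run, passed) tuples reported back

    def add_job(self, runs):
        # Submitting a job queues one run per test suite.
        for run in runs:
            self.pending.put(run)

    def get_run(self):
        # A connected browser polls for its next run.
        try:
            return self.pending.get_nowait()
        except queue.Empty:
            return None

    def save_run(self, run, passed):
        # The client reports the outcome of a finished run.
        self.results.append((run, passed))


def client_loop(server, run_test):
    # Each swarm client fetches runs until none remain, executing the
    # suite (in a real client: inside an iframe) and reporting back.
    while True:
        run = server.get_run()
        if run is None:
            break
        server.save_run(run, run_test(run))
```

A usage sketch: create the server, queue a job with two suites, and let one client drain the queue.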
An important aspect of TestSwarm is its ability to proactively correct bad results coming in from clients. As any web developer knows, browsers are surprisingly unreliable (inconsistent results, browser bugs, network issues, etc.). Here are a few of the things that TestSwarm does to try to generate reliable results:
Together, these strategies make the swarm quite resilient to misbehaving browsers, flaky internet connections, and even poorly written test suites.
For example, if a job has a runmax of 3 and a run fails the first time, TestSwarm will distribute it again (preferably to a different client with the same user agent; otherwise the same client will receive it again later) until it either passes or hits the maximum of 3 runs.
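That re-run policy can be sketched as a small loop. This is a conceptual model, not TestSwarm's actual PHP implementation; the `Client`, `Run`, and helper names are illustrative. The key points from the text are preserved: a failed run is handed out again, preferring a client with the same user agent that has not tried it yet, until it passes or reaches runmax.

```python
class Client:
    """Toy swarm client; `will_pass` simulates this client's outcome."""

    def __init__(self, cid, user_agent, will_pass):
        self.cid = cid
        self.user_agent = user_agent
        self.will_pass = will_pass

    def execute(self, run):
        # In reality the client runs the test suite in the browser.
        return self.will_pass


class Run:
    """One scheduled run of a job's test suite for one user agent."""

    def __init__(self, user_agent, runmax=3):
        self.user_agent = user_agent
        self.runmax = runmax
        self.tried = set()   # ids of clients that already attempted it
        self.attempts = 0


def pick_client(run, clients):
    # Prefer a matching client that has not tried this run yet;
    # otherwise fall back to re-using a previous client later.
    same_ua = [c for c in clients if c.user_agent == run.user_agent]
    fresh = [c for c in same_ua if c.cid not in run.tried]
    return (fresh or same_ua)[0]


def process(run, clients):
    # Redistribute until the run passes or runmax is reached.
    while run.attempts < run.runmax:
        client = pick_client(run, clients)
        run.tried.add(client.cid)
        run.attempts += 1
        if client.execute(run):
            return "passed"
    return "failed"
```

With a runmax of 3 and two Firefox clients where only the second one passes, the run fails once, is redistributed to the other client, and ends up passing on the second attempt.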
Selenium provides a fairly full stack of functionality: a test suite, a test driver, automated browser launching, and the ability to distribute test suites to many machines (using its grid functionality). There are a few important ways in which TestSwarm is different:
For many organizations, Selenium may already suit their needs (especially those that already have a form of continuous integration set up).
There are many other browser launching tools (such as Watir), but all of them suffer from the same problems described above, and frequently with even less support for advanced features like continuous integration.
A popular alternative to launching browsers and running test suites is running tests in headless instances of browsers (or in browser simulations, such as Rhino). All of these suffer from a critical problem: at a fundamental level you are no longer running tests in an actual browser, so the results can no longer be guaranteed to be identical to an actual browser's. Nothing can truly replace running real code in a real browser.
Last edited by Timo Tijhof,