Provide exportable test results in JSON format #214

wants to merge 6 commits into




I came up with a more generic format for the results that is both easy to parse and clean. This is how it looks:

    {
        name: 'Suite Name',
        failed: 1,
        passed: 58,
        total: 59,
        duration: 57,
        modules: [{
            name: 'Module Name',
            duration: 37,
            failed: 1,
            passed: 58,
            total: 59,
            tests: [{
                name: 'Test Name',
                duration: 1,
                failed: 1,
                passed: 10,
                total: 11,
                assertions: [{
                    actual: 6,
                    expected: 7,
                    message: 'Looping over an array',
                    result: 0,
                    source: '([object Object])@http://.../tests/Utils.html:109'
                }]
            }]
        }]
    }

And it includes all results, both successful assertions and failures.
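A consumer (say, a CI reporter) could walk this structure to pull out just the failures. A minimal sketch, assuming the shape shown above; the sample data and the `collectFailures` helper are illustrative, not part of this PR:

```javascript
// Sketch only: walks the proposed result format to list failing assertions.
// The sample object below is hypothetical data in that shape.
var results = {
    name: 'Suite Name',
    failed: 1, passed: 58, total: 59, duration: 57,
    modules: [{
        name: 'Module Name', duration: 37, failed: 1, passed: 58, total: 59,
        tests: [{
            name: 'Test Name', duration: 1, failed: 1, passed: 10, total: 11,
            assertions: [
                { actual: 6, expected: 7, message: 'Looping over an array',
                    result: 0, source: '' },
                { actual: 1, expected: 1, message: 'Counter starts at one',
                    result: 1, source: '' }
            ]
        }]
    }]
};

function collectFailures( results ) {
    var failures = [];
    results.modules.forEach( function ( module ) {
        module.tests.forEach( function ( test ) {
            test.assertions.forEach( function ( assertion ) {
                // result is 0 for a failed assertion, 1 for a passed one
                if ( assertion.result === 0 ) {
                    failures.push( module.name + ' > ' + test.name +
                        ': ' + assertion.message );
                }
            } );
        } );
    } );
    return failures;
}
```

Because every level carries its own `failed`/`passed`/`total` counters, a reporter can also print summary stats without visiting individual assertions.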

@Krinkle Krinkle commented on an outdated diff Jul 11, 2012
@@ -238,4 +241,4 @@ CREATE TABLE `runresults` (
CREATE INDEX idx_runresults_run_client ON runresults (run_id, client_id);
ALTER TABLE runresults
- ADD CONSTRAINT fk_runresults_client_id FOREIGN KEY (client_id) REFERENCES clients (id);
+ ADD CONSTRAINT fk_runresults_client_id FOREIGN KEY (client_id) REFERENCES clients (id);
Krinkle Jul 11, 2012

(restore the new line at EOF)

@Krinkle Krinkle commented on an outdated diff Jul 11, 2012
@@ -49,7 +50,7 @@ public function doAction() {
if ( !$row ) {
$this->setError( 'invalid-input', 'Runresults ID not found.' );
- }
+ }
Krinkle Jul 11, 2012

(trim trailing whitespace)

@Krinkle Krinkle and 1 other commented on an outdated diff Jul 11, 2012
@@ -59,6 +59,7 @@ public function doAction() {
$error = $request->getInt( 'error', 0 );
$status = $request->getInt( 'status', 2 );
$reportHtml = $request->getVal( 'report_html', '' );
+ $reportJSON = $request->getVal( 'report_json', '' );
Krinkle Jul 11, 2012

Please add validation here. At the very least round-trip it once to verify it contains no syntax errors. This also clears out any whitespace or other oddities.

Ideally it would also walk through and validate the contents, but that could be added later.
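The round-trip idea can be sketched in a few lines: parsing rejects syntax errors, and re-serializing normalizes whitespace. This is a minimal illustration of the suggestion, not TestSwarm code (the actual change lands server-side in PHP):

```javascript
// Sketch of the suggested round-trip validation: JSON.parse rejects
// malformed input, JSON.stringify normalizes the accepted input.
function roundtripJSON( raw ) {
    var parsed;
    try {
        parsed = JSON.parse( raw );
    } catch ( e ) {
        // Invalid JSON: signal rejection to the caller
        return false;
    }
    // Re-serializing strips extra whitespace and other oddities
    return JSON.stringify( parsed );
}
```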

jayarjo Jul 11, 2012

Should I add a handy getJSON method to the WebRequest class, or will local validation be enough?

As for content validation, is it OK to add a method directly to SaverunAction, or what would be a better place for it?

Krinkle Jul 11, 2012

A WebRequest::getJSON method sounds great. Make sure you allow for a different default (some usages might want to default to something like "{}" when the value is invalid, while others might want to get PHP boolean false and abort instead of silently failing).
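The behavior described here can be sketched as a JavaScript analogue, purely to illustrate the caller-chosen default; the function name and signature are illustrative, not the TestSwarm API:

```javascript
// Sketch of the proposed getJSON semantics: return the parsed value,
// or a caller-chosen default when the input is not valid JSON.
function getJSON( raw, defaultValue ) {
    try {
        return JSON.parse( raw );
    } catch ( e ) {
        return defaultValue;
    }
}

// Callers pick the failure mode:
// getJSON( input, {} )    -> silently fall back to an empty object
// getJSON( input, false ) -> receive false and abort explicitly
```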

Content validation I'd put in a public static method of ResultAction.

jayarjo Jul 11, 2012

ResultAction, hm... ok.

jayarjo Jul 11, 2012

Still not sure about content validation; as a single function it turns out quite ugly :|


Does this one require anything else to be merged?

jQuery Foundation member

I haven't had time to verify that this works, and I am not sure about the format yet.


Is there any plan to merge or implement something like this? The inability to export results to third-party scripts is the only thing that is holding off our Node.js module... Quite sad... :|

Did jQuery already switch to Yeti? Should we consider TestSwarm a doomed project?

jQuery Foundation member

I'm sorry I can't get to this yet, but to give a quick heads up:

  • No, we didn't switch to Yeti.
  • TestSwarm is very much alive and being worked on continuously.
  • I understand you want more detail, but there is already an export feature that contains a lot of detail.

Why exactly do you need to export the detailed contents of individual runs? How does that fit into the automated continuous integration picture?

Just in case you missed recent developments, this is what we currently have in master:

The most important piece of information from a continuous integration perspective (e.g. Jenkins) is whether the entire job that was submitted to TestSwarm succeeded or failed, and in case of failure you need to give the developer enough information to know what failed, where, and how.

I'd say the current version of TestSwarm master provides more than enough information to know exactly what failed, where, and how. The API reports:

  • the job/commit that contains the regression
  • the run inside that job that contained the actual failed test (e.g. "test suite" or "module" whatever your framework calls it)
  • the browser(s) that run was distributed to that resulted in failure
  • url(s) to the test results of those browser runs, which show a DOM snapshot with each individual test, each individual assertion, and a stack trace for failures (the snapshot is provided by the test framework, so whether or not it features a stack trace depends on your framework of choice)

That's what is available right now in the API; I'd say that's a pretty high level of detail that should be more than enough.

Don't get me wrong, I very much like the idea of also storing the individual assertions in a unified format. And as far as I'm concerned we'll definitely get that some day. But it doesn't seem to provide much additional usefulness to the end user, so it doesn't justify the effort right now, when there are more important things to work on.

From what I can see, having this satisfies only two minor use cases, neither of which adds an ability; they only make things a little more convenient.
1. The test results page can be generated from the unified storage, so that it has a consistent presentation regardless of whether you used QUnit or Mocha etc. This is nice to have but hardly a problem solver; these frameworks have great built-in HTML formatters that you're probably familiar with already (and might even prefer).
2. Embedding the individual assertion descriptions in your continuous integration reporter (as opposed to "just" the job name, module name, browser name and a link to where the assertion descriptions are).


It's just that during development it's quite useful and convenient (and also calming) to run the test suites before any commit. Of course there are hooks and the possibility of canceling the commit operation, but that looks like an unnecessary complication of the process. It is much easier to simply run all test suites at any desired moment and see if any of them fail because of recent changes. Failure details are a crucial part of the experience, I think: what test failed, what was expected, and what was the actual result, as well as overall stats. And it's just quicker to do from the console. The script opens a tunnel if required, launches particular browser instances (from BrowserStack, for example), retrieves the results and displays them in the console, similar to how Yeti does it.

The change will hardly complicate anything in the code and should be quite straightforward to implement, I think. The time to make a working solution might easily compare to the time you spent writing why you shouldn't do it :)

jQuery Foundation member

@jayarjo The priority has been made clear. If it's so simple, why don't you just send a PR?


Well... I did, we are commenting under it.

jQuery Foundation member

Oh sorry. I should've looked at the context :-/ I'll go hide now...

@Krinkle Krinkle commented on the diff Feb 20, 2013
@@ -72,7 +72,44 @@ public function getBool( $key ) {
public function getInt( $key, $default = 0 ) {
return intval( $this->getVal( $key, $default ) );
+ /**
+ * Interpret a value as JSON string and return a parsed object. Optionally
+ * throw an exception if value cannot be parsed.
+ * @param $key string
+ * @param $default mixed
+ * @param $throw bool If true will throw an exception, if value cannot be parsed
+ * @return array|object|mixed JSON string parsed to object or default value
+ */
+ public function getJSON( $key, $default = null, $throw = false ) {
+ $val = $this->getRawVal( $this->raw, $key, $default );
+ $json = json_decode( $val );
+ if ( is_null( $json ) ) {
+ if ( !$throw_exception ) {
Krinkle Feb 20, 2013

PHP Error: Undeclared variable $throw_exception.

Should probably be $throw instead.

@Krinkle Krinkle commented on the diff Feb 20, 2013
+ QUnit.log = function(assertion) {
+ results[moduleCount].tests[testCount].assertions.push({
+ actual: assertion.actual,
+ expected: assertion.expected,
+ message: assertion.message,
+ result: assertion.result ? 1 : 0, // number is more convenient to handle
+ source: assertion.source || ''
+ });
+ window.TestSwarm.heartbeat();
+ };
+ QUnit.testDone = function(test) {
+ var alias = results[moduleCount].tests[testCount];
+ alias['duration'] = +(new Date()) - testTime;
Krinkle Feb 20, 2013

Use dot notation instead.

@Krinkle Krinkle was assigned Feb 20, 2013
jQuery Foundation member

@jayarjo Hm.. so how do you reference testswarm from the command line in that case?

You would need to install TestSwarm first (or are you providing it hosted as a service somewhere?), know the URL and authentication for TestSwarm, then publish the test suite from the current working directory somewhere on a web server and make it accessible to the browser clients. From that point on it is pretty straightforward with node-testswarm (or something like it).

Anyway, I'll get to this after I finish and land the projects branch, which is a big refactoring that I don't want to rebase because of this; I'd rather do it the other way around. It should land within a week.

Meanwhile, could you update your PR to the latest master and maybe clarify the individual commits? E.g. keep out unrelated commits (if any) and maybe put those in a separate topic branch/PR, and squash commits that are fixups (whitespace, typos, follow-up bug fixes). This PR should probably only have 1 or 2 commits in it.

jQuery Foundation member

@jayarjo do you have any interest in bringing this up to date? @Krinkle do you? Otherwise we should probably close this; it's been open for almost 2 years without comments or updates.

@Krinkle Krinkle removed their assignment Jun 23, 2015
@Krinkle Krinkle closed this Jun 23, 2015