
Regression test Suite #5

Closed
cure53 opened this issue Feb 28, 2014 · 28 comments

@cure53 (Owner) commented Feb 28, 2014

We need a proper regression test suite to make sure minor changes don't break the protection against known-bad input.

cc @fhemberger

@fhemberger (Contributor) commented

I know, I know … sorry for being late to the party. ;)

@fhemberger (Contributor) commented

We should split the code of the demo into meaningful chunks so we can try to make unit tests out of them:

  • synchronous XSS tests: alert(test-id) should not be fired
  • asynchronous XSS tests: on(load|error|etc)=alert(test-id) should not be fired
  • XSS tests requiring interaction: alert(test-id) should not be fired
  • general mathml parsing test
  • general svg parsing test
  • svg xlink test: attributes should be removed
  • escaping tests: everything should be properly escaped
  • etc.

What would be the best way to load those patterns? A JSON file containing an array of pattern/expected results (when comparing DOM fragments)?

Then I'd love to go with QUnit to run those tests in the browser. Though not all tests may succeed (depending on the engine), it would give a better picture.

@cure53 (Owner) commented Mar 1, 2014

I think JSON is best indeed. We should maybe do both: the canary test (alert() or foo()) and expectation matching. We also need to catch cases where benign markup is crippled too much.

Regarding JSON as an option: what labels would you propose or see as useful and necessary?

@fhemberger (Contributor) commented

The expectation of a passed test would be that foo() was not called (I don't want alerts all over the place in a unit test) and that the purified result matches a given string.

For the JSON format, I think the following should be sufficient:

[
    {
        "pattern": "<p><img src=\"\" onerror=foo(1)></p>",
        "expected": "<p></p>"
    }
]
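A minimal runner over such fixtures might then look like this (a sketch only: sanitize() here is a stub standing in for DOMPurify.sanitize and merely strips <img> tags for illustration):

```javascript
// Fixtures in the proposed pattern/expected shape.
var fixtures = [
  { pattern: '<p><img src="" onerror=foo(1)></p>', expected: '<p></p>' }
];

// Stub standing in for DOMPurify.sanitize: it only strips <img> tags,
// which is enough to illustrate the runner. The real sanitizer does far more.
function sanitize(html) {
  return html.replace(/<img\b[^>]*>/gi, '');
}

// Collect every fixture whose sanitized output differs from the expectation.
var failures = fixtures.filter(function (fixture) {
  return sanitize(fixture.pattern) !== fixture.expected;
});

console.log(failures.length === 0
  ? 'all ' + fixtures.length + ' tests passed'
  : failures.length + ' test(s) failed');
```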

@cure53 (Owner) commented Mar 1, 2014

I added /DOMPurify/blob/master/tests/expect.json

Would that be okay format-wise? If so I would gradually start filling it with data.

I am a wee bit worried about minimal browser-based mismatches (tests failing in browser 1 that would pass in browser 2, etc.). I think we should come up with a solution once we actually run into the problem, though.

@mozfreddyb (Contributor) commented

One could also test for script execution instead of a pre-defined output result, i.e. inserting things into the DOM and defining foo() as part of the test bed.
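That canary idea can be sketched like so (a sketch only: the DOM insertion is simulated by a flag here, since the real test bed would insert the sanitized markup into a live document):

```javascript
// foo() is defined by the test bed and records calls instead of alerting.
var fooCalls = [];
function foo(testId) {
  fooCalls.push(testId);
}

// Simulated outcome of inserting a payload into the DOM: if an inline
// handler survived sanitization, the browser would end up calling foo().
function insertPayload(handlerSurvived) {
  if (handlerSurvived) {
    foo(1);
  }
}

insertPayload(false); // properly sanitized markup: the handler was stripped
console.log(fooCalls.length === 0 ? 'pass: foo() was never called' : 'fail');
```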

@fhemberger (Contributor) commented

@mozfreddyb See two comments above. That was the plan. ;) We check for both, because there are cases where nothing is executed (e.g. when testing for DOM clobbering, stripping out certain tags or attributes).

@cure53 (Owner) commented Mar 25, 2014

@fhemberger I think we are essentially one step away from finishing this ;) How should we do it? A small test runner in the browser that simply iterates over the expectations and compares them with the output? Or do we need something more complex?

@mathiasbynens (Contributor) commented

Small test runner in the browser that simply iterates over the expectations and compares with the output?

FWIW, that’s what I had in mind — seems sufficient.

@cure53 (Owner) commented Mar 25, 2014

I'd love to integrate the test run into the build process and show the fancy banner on the GitHub page. Has anyone done this before? If not, I'll have a look at how that works.

@mathiasbynens (Contributor) commented

I’ve done this before with Travis CI. The problem here is that we need to run the tests in a browser rather than a command-line environment. One solution would be to use PhantomJS (a headless WebKit-based browser), but then we’d still have to test other browsers manually.

@fhemberger (Contributor) commented

I thought about using QUnit for it, parsing the JSON file and running the tests in the browser.
Regarding the badge: browsers might return different DOM structures, so tests might fail even though DOMPurify itself is not at fault.

@cure53 (Owner) commented Mar 26, 2014

@mathiasbynens @fhemberger Yeah, I agree. The badge might not make sense here. About QUnit: I have no objections; I prefer a framework over a self-built tool. Ready when you are!

@fhemberger (Contributor) commented

Ok, I finally committed my first take on the tests: run npm install first, then npm test to fire up the static file server. The QUnit tests are available at http://localhost/test/.

Known issues:

  • The node.js server triggers a "possible EventEmitter memory leak" warning when running the tests (ugly, but can be ignored for the tests themselves)
  • 9 sanitization tests fail at the moment (Chrome 33/Mac)
  • 3 XSS tests are triggered

Please add meaningful title attributes for each test in expect.json; these will be shown with the tests, so it's easier to figure out what went wrong.
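An entry with such a title could then look like this (illustrative only; the exact field name in expect.json is an assumption):

```json
[
    {
        "title": "onerror handler on an img with empty src must be stripped",
        "pattern": "<p><img src=\"\" onerror=foo(1)></p>",
        "expected": "<p></p>"
    }
]
```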

Feedback appreciated. ;)

@cure53 (Owner) commented Mar 29, 2014

@fhemberger Nice, thanks! I'll have a look at the false alerts tonight and see what causes them.

@fhemberger (Contributor) commented

Also added JSHint support: You can now check for possible JS syntax issues with npm run jshint. npm test now runs both JSHint and QUnit.

@cure53 (Owner) commented Mar 29, 2014

The browsers unfortunately show absurd differences. I managed to get Chrome down to zero failures on the sanitization tests but don't yet know why the alert triggers. I assume it's jQuery's html(). Still testing.

@cure53 (Owner) commented Mar 29, 2014

Sigh. It is jQuery's html():

// compare:
document.body.innerHTML = '<option><style></option></select><b><img src=xx: onerror=alert(1)></style></option>' // safe

$('body').html('<option><style></option></select><b><img src=xx: onerror=alert(1)></style></option>') // unsafe

@cure53 (Owner) commented Mar 29, 2014

I think for us the issue resolves easily: we already fixed this while rewriting content for usage in $(). The $(elm).html() logic is no different. We just need to make sure the option SAFE_FOR_JQUERY is set to true. In the tests it's false (the default), hence the warning about the 3 alerts.

@fhemberger (Contributor) commented

Ok, I just pushed an update: The XSS tests are now run for native DOM methods and jQuery separately. You were right, SAFE_FOR_JQUERY did the trick.

@cure53 (Owner) commented Mar 30, 2014

Perfect, thanks!

Now we have one final problem to tackle: the browsers' differing output. Should we make the expected field an array and work with indexOf, or rather an object literal with the browser short-name as key and the expectation as value? indexOf would be faster and more efficient, I believe.

@mathiasbynens (Contributor) commented

Efficiency should not be a concern in test suites.

+1 to the object literal approach – makes it easier to read and maintain the whole thing. In the test runner we could just do something like this:

var expectedValues = Object.keys(object).map(function(key) { return object[key]; });
if (expectedValues.indexOf(currentValue) > -1) {
  // pass
} else {
  // fail
}

@cure53 (Owner) commented Mar 31, 2014

@fhemberger Are you sure the push went through? I still get the "alert warnings" on Chrome :)

@fhemberger (Contributor) commented

@cure53 Sorry, it got stuck. Just pushed it again. Should work now.

@cure53 (Owner) commented Apr 5, 2014

@fhemberger Thx, it does :) Two questions remain imho:

a) Can we get rid of this in a reasonable way?
purify.js: line 225, col 47, 'NodeList' is not defined.

b) How do we best implement individual test cases? One example: I want to make sure we get what we expect when a very specific set of config flags is given (see the last comments in #15). How do we best approach this systematically?

@cure53 (Owner) commented Apr 11, 2014

@fhemberger @mathiasbynens

I thought about it for quite some time and realized that an array is better than an object with labels to identify browsers. Not for performance reasons, of course, but for maintenance benefits. I would want an array of possible sanitization results for each vector, not a map. In a map I would, for example, have to identify browser versions too, and keep a label for Chrome 33, Chrome 36 and so on. That would bloat the effort over time and not help anyone.

Can we change the code so it checks for a match in an array of strings rather than against a single string? Then we can finally close this task and have that mandatory extra bit of security and reliability we have been lacking so far (the lack of unit tests caused several bypasses in the recent past; that must not happen again and is priority one to avoid).

@cure53 (Owner) commented Apr 11, 2014

I implemented a basic QUnit.assert.contains() and added tests for FF and Chrome 33. I will now move on to fixing Chrome 34+ and IE. Review highly appreciated :)
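The core of such a contains assertion boils down to an array membership check (a sketch of the idea only; the actual QUnit.assert.contains() would additionally wire the result into QUnit's reporting):

```javascript
// True when the actual sanitizer output matches any of the acceptable
// per-browser serializations listed for a vector.
function contains(possibleResults, actual) {
  return possibleResults.indexOf(actual) > -1;
}

// One vector, two acceptable serializations across engines (made-up variants).
var variants = ['<p></p>', '<p> </p>'];
console.log(contains(variants, '<p></p>'));      // true
console.log(contains(variants, '<p><img></p>')); // false
```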

cure53 pushed a commit that referenced this issue Apr 11, 2014
see #5
Added additional titles
All tests green on Chrome 36 / Canary

cure53 pushed a commit that referenced this issue Apr 11, 2014
see #5
Fixed tests for IE11

cure53 pushed a commit that referenced this issue Apr 11, 2014
see #5
Fixed an intolerance on IE11
Fixed the tests for IE11
Fixed the expectations

cure53 pushed a commit that referenced this issue Apr 11, 2014
see #5
Tests now work on IE10,11,Op,GC,Sf,FF,Chromium
Added more cases for IE10
Re-enabled the jQuery tests

cure53 pushed a commit that referenced this issue Apr 11, 2014
see #5
Fixed a test-case for Safari 7.0.3
@cure53 (Owner) commented Apr 11, 2014

This can be closed; we now have the test suite working on IE10, IE11, FF, Chrome 33 and 36, Opera 15+, and Safari 7+.

cure53 closed this as completed Apr 11, 2014