Test adapters with all frameworks versions #72

Merged
merged 8 commits into master from working-versions on Jul 5, 2016

Conversation

2 participants
@flore77
Contributor

flore77 commented Jul 2, 2016

No description provided.

@jzaefferer
Contributor

jzaefferer commented Jul 2, 2016

I guess the CI build passed because it didn't run the new script at all?

@flore77
Contributor

flore77 commented Jul 2, 2016

Yes! I don't think we should integrate this script into the CI build, because I doubt that we can make the adapters pass for all versions. I made it to get an overview and maybe to investigate some of the more recent versions. For example, our adapters fail for Jasmine 2.3.0, which is quite strange.

@flore77
Contributor

flore77 commented Jul 2, 2016

And for the other 2.3.x versions (2.3.1, ...) it is working.

@flore77
Contributor

flore77 commented Jul 2, 2016

Also, not all of them are incompatible with our adapters; some will fail simply because they lack functionality needed by the test fixtures, for example:

  • I have seen that one QUnit version was failing because it didn't have the skip method
  • other QUnit versions fail because they do not support nested suites
@flore77
Contributor

flore77 commented Jul 2, 2016

For example, for QUnit only versions from 1.20.0 onwards will pass, because nested modules were implemented in that version; from the QUnit history file:

Core: Implement Nested modules

Should we add some extra fixtures? For example a fixture without nested modules, taking care to run each version against the right fixture? Or should we run this script against the simplest fixture possible (only one passing test, one failing test and a suite, something like that)?
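The 1.20.0 cut-off mentioned above could be expressed as a simple version check. This is only an illustrative sketch; the PR itself lists the affected versions explicitly rather than comparing version strings:

```javascript
// Returns true if a "major.minor.patch" version string is at least
// 1.20.0, the first QUnit release with nested modules. A pre-release
// suffix such as "1.12.0-pre" is stripped before comparing.
function supportsNestedModules (version) {
  var parts = version.split('-')[0].split('.').map(Number);
  var major = parts[0];
  var minor = parts[1] || 0;
  return major > 1 || (major === 1 && minor >= 20);
}

console.log(supportsNestedModules('1.19.0')); // → false
console.log(supportsNestedModules('1.20.0')); // → true
```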

@jzaefferer
Contributor

jzaefferer commented Jul 2, 2016

I'm still interested in running this all the time. How about a "whitelist" (or "blacklist"?) of versions that we know to fail? We'd assume that newer releases will pass, so if newer releases regress, we'd notice the next time we run this test.

If it's too slow to run for every build, we could at least run it locally every now and then and know that the output is reliable (already ignoring known issues).

@jzaefferer
Contributor

jzaefferer commented Jul 2, 2016

Let's start simple, e.g. "blacklist" all QUnit versions below 1.20.0, since we know those don't work as expected. We could break that down further when we have a need for it, e.g. if someone reports an issue with an older QUnit version with js-reporters.
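A minimal sketch of what such a blacklist module could look like (the helper and the exact version list here are hypothetical, not necessarily what this PR ends up with):

```javascript
// Hypothetical sketch of a known-failing "blacklist": versions we
// accept to fail, with the reason recorded next to them.
var knownFailing = {
  // QUnit below 1.20.0 lacks nested modules, which the fixtures use.
  qunitjs: ['1.9.0', '1.10.0', '1.11.0', '1.12.0', '1.13.0']
};

// Everything not on the list is expected to pass, so a brand-new
// release is assumed to work until a run proves otherwise.
function expectedToPass (framework, version) {
  var failing = knownFailing[framework] || [];
  return failing.indexOf(version) === -1;
}

console.log(expectedToPass('qunitjs', '1.9.0')); // → false
console.log(expectedToPass('qunitjs', '2.0.0')); // → true
```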

@flore77
Contributor

flore77 commented Jul 2, 2016

If it's too slow to run for every build

It would be, a little bit.

And should we write this "blacklist" down, so that we know when a new version appears, and which versions were working and which were not?

@jzaefferer
Contributor

jzaefferer commented Jul 3, 2016

It would be, a little bit.

How bad is it though? It's okay for a CI build to take a few minutes. We can update .travis.yml to run this separate script, so that we don't have to make it part of npm test.

And should we write this "blacklist" down, so that we know when a new version appears, and which versions were working and which were not?

I'd list known failing versions (where we want to accept/ignore those failures), and assume new versions are going to work fine. If new versions fail, we want to know about that.

@flore77
Contributor

flore77 commented Jul 3, 2016

How bad is it though?

It takes a good few minutes (13 mins), but that's better for ensuring proper development 😃

I integrated the script into 3 tests, which check that the set of failing (not working) versions stays constant.

Also, I don't know why the build is failing on Node stable. Any idea?

Some versions have caught my attention:

  • Jasmine 2.3.0
  • Mocha 2.1.0, 2.5.0

I investigated Jasmine 2.3.0: the adapter is working, but Mocha receives SIGINT and aborts the runner. I have done some debugging, but I still haven't found the source.

@@ -0,0 +1,14 @@
+module.exports = {
+ 'qunitjs': ['1.9.0', '1.10.0', '1.11.0', '1.12.0-pre', '1.12.0', '1.13.0',

@jzaefferer
Contributor

jzaefferer Jul 4, 2016

Please add comments here about known issues, and why we're ignoring them. Can be very brief, but better now while we still know what's going on.

@jzaefferer
Contributor

jzaefferer commented on test/versions/failing-versions.js in e806188 Jul 5, 2016

Mocha? Should be "Jasmine" here, right?

@flore77
Contributor

flore77 replied Jul 5, 2016

No, it is Mocha, because Mocha is the framework we are using for testing. Mocha receives SIGINT and aborts its runner; from my debugging I saw that the Jasmine adapter also emits the runEnd event, so it should be working. I think something happens within Mocha, but I'm not 100% sure.

@flore77
Contributor

flore77 replied Jul 5, 2016

Should I make the comment clearer?

@jzaefferer
Contributor

jzaefferer replied Jul 5, 2016

Yeah, just expand the comment a little, otherwise we'll be confused about this the next time we look...

@flore77
Contributor

flore77 commented Jul 5, 2016

Done.

@jzaefferer
Contributor

jzaefferer commented Jul 5, 2016

👍

@flore77 flore77 merged commit 2933f83 into master Jul 5, 2016

2 checks passed

continuous-integration/travis-ci/pr: The Travis CI build passed
continuous-integration/travis-ci/push: The Travis CI build passed

@flore77 flore77 deleted the working-versions branch Jul 10, 2016
