
Speed up Casper tests #5330

Closed
JohnONolan opened this Issue May 25, 2015 · 10 comments

@JohnONolan
Member

commented May 25, 2015

Is it possible to replace our custom casper implementation with something like https://github.com/ronaldlokers/grunt-casperjs to take advantage of Async Parallel and speed up our Casper tests significantly?

@ErisDS

Member

commented May 28, 2015

I have taken a brief look at that module, and it doesn't seem like it's going to work for us. I only tried it briefly, but I'm not sure we can configure it to do the things we want.

Second problem - the parallelisation is done using deprecated features of grunt.

My recommendation would be to rewrite our own casper spawning util, such that each file is split out and run as a parallel/concurrent task, and have the runner start them all and manage them using promises.

This concept of parallelising the tests could be extended to use something like https://github.com/sindresorhus/grunt-concurrent to run the functional tests, unit tests and ember unit tests all at the same time - as they should be safe to run together.

All of the other tests use the same DB I think, so need to be in series (but are pretty fast anyway).

@ErisDS

Member

commented Jun 27, 2015

Rather than trying to run individual bits of the tests concurrently, we should definitely do this: http://docs.travis-ci.com/user/speeding-up-the-build/

Using this mechanism, we could run our unit tests, integration & route tests as one suite, and the casper tests (or whatever we replace them with) as a separate 'suite' which only runs once (say on the first build), sort of how we're discussing here: #2029 (comment)

@hoxoa

Contributor

commented Jun 30, 2015

Will take a look at this.

Another thing regarding speed: on every build (merge and PR) two tasks are executed, shell:ember:init and shell:bower, which download and install a lot of modules. Why not cache them, too?

Just tested that, see travis.yml and the build here. It's a bit faster, though not by much.
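For reference, Travis directory caching is declared in .travis.yml roughly like this; the directory paths below are assumptions about where the npm, ember and bower dependencies land, not the actual layout.

```yaml
# Hypothetical sketch - paths are assumptions, not Ghost's real layout
cache:
  directories:
    - node_modules
    - core/client/node_modules
    - core/client/bower_components
```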

@ErisDS

Member

commented Jul 1, 2015

Caching in the build was implemented when we only had npm modules, and I don't think anyone's really thought about it since then. This seems like a smart move :)

@hoxoa

Contributor

commented Jul 8, 2015

I have a working version. The average build time is currently nearly 2 hours; when running the casper tests only once, it goes down to nearly 1 hour (half the time).

I made a script which checks for the first job and runs the casper tests only on that job, on every build (merge + PR). The new travis task is executed by travis on every job, and when it's the first job, the functional task is also executed. I set it up this way so that anyone who runs npm test or grunt validate locally can keep doing so, and travis also knows what to do.
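The core of such a first-job check can be sketched like this, assuming Travis's convention of numbering jobs as "<build>.<job>" in TRAVIS_JOB_NUMBER; the function name here is made up for illustration.

```javascript
// Travis numbers jobs "<build>.<job>", e.g. "1234.1" is the first job of
// build 1234. The functional suite should only run on that first job.
// Note the regex anchors on ".1" at the end, so "1234.11" does not match.
function isFirstJob(jobNumber) {
    return /\.1$/.test(jobNumber || '');
}

// A travis task could then decide whether to queue the functional suite:
// if (isFirstJob(process.env.TRAVIS_JOB_NUMBER)) { /* run functional tests */ }
```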

You can see two working builds here, first and second.

The first one looks a bit different because I forgot the --verbose.

The difference between the two is that in the second, the script lives in a .sh file in the new scripts folder, while in the first, the script is inline in the travis.yml file.
When you check the output (first here and second here), the second is - in my opinion - much cleaner, because only the echos are output and not the whole script.

If we choose the second, I will put the code coverage script in the scripts folder, too.

@ErisDS

Member

commented Jul 9, 2015

Hey there @hoxoa thanks for plugging away on this. I really like that you show different approaches, it's really helpful 👍

My personal preference would be to keep the scripts inside the travis.yml file - the first option. As the scripts are only used by travis, I think it's better to keep the code in the travis file, so that it's straightforward to understand for anyone new to the repo. Adding more files also adds clutter to the root of the repository.

If the scripts were used in multiple places, it might make more sense, but for now I think stick to the first option.

Otherwise this is looking awesome, cutting our build times in half! 👯

@hoxoa

Contributor

commented Jul 10, 2015

@ErisDS Thanks.

Ok, then I will make a PR with the first option.

Is the way I implemented this, with the changes in the Gruntfile, OK?

@jaswilli

Member

commented Jul 10, 2015

Why is it okay to only run the functional tests against one configuration in the build matrix? Just because everything looks alright with node v0.10.40 and sqlite3 doesn't mean it's working for any other combination of node and database versions.

The first priority should be fixing the things that cause the builds to fail. Only a very, very small number of "random travis failures" are truly caused by a hiccup on travis--most of them are triggered by bugs.

If there's a feeling that the overall build time is too long I'd much rather just drop support for io.js v1.2.0. Anyone inclined to be using io.js is almost certainly not going to be using version 1.2.0--testing against it likely provides no one any useful information.

@ErisDS

Member

commented Jul 10, 2015

I was thinking rather of running the functional tests once for each DB + node env pairing - so they would run 3 times: node 0.10 + sqlite, node 0.12 + mysql & io.js + pg. I thought I had written that in here :/

The database is highly unlikely to affect the functional tests, so running these tests 9 times is redundant - surely one run per node version is enough?

Only a very, very small number of "random travis failures" are truly caused by a hiccup on travis--most of them are triggered by bugs.

It seems to me that these are as likely to be bugs in the tests as they are bugs in the code and it concerns me that it's so difficult to track down the cause when an issue does arise. I think moving towards relying more heavily on ember unit tests is a sensible long term plan?

P.S. I'd also be in favour of dropping our io.js tests - they're wildly out of date.

ErisDS added a commit to ErisDS/Ghost that referenced this issue Jul 10, 2015

Remove io.js from the build matrix
refs TryGhost#5330

- io.js 1.2 is massively out of date, testing against it isn't useful
@ErisDS

Member

commented Oct 9, 2015

Going to close this as the current plan is to remove the casper.js tests in favour of something else. @kevinansfield is working on the plan for this.

@ErisDS ErisDS closed this Oct 9, 2015
