
Cold start performance #14

Closed · weavejester opened this issue Nov 7, 2018 · 7 comments
@weavejester commented Nov 7, 2018

Kaocha looks extremely promising, but the time it takes to start is significant. On a barebones test file with no other dependencies, using the latest Clojure CLI run script, it takes an average of about 7 ±0.3 seconds to load and run on a 2018 MacBook Pro. Cognitect's test runner, on the other hand, completes in a fairly consistent 1.7 seconds.

While Kaocha is clearly doing a lot more, four times the startup cost is significant, and a lot of what kaocha.runner loads is pulled in without any check that it's actually needed. For instance, kaocha.watch and all its associated dependencies are loaded whether or not the --watch option is used. Similarly, loading kaocha.config could be skipped if no configuration files exist. I suspect you could improve cold start times significantly just by not loading namespaces you don't need up front.
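
As a rough illustration of the idea (not Kaocha's actual code; parse-options and run-tests here are hypothetical placeholders), the runner could gate the require behind the flag that needs it:

```clojure
;; Rough illustration only -- parse-options and run-tests are hypothetical
;; placeholders, not Kaocha functions.
(declare parse-options run-tests)

(defn -main [& args]
  (let [options (parse-options args)]
    (if (:watch options)
      ;; Only load kaocha.watch (and, transitively, core.async) when the
      ;; --watch flag was actually given.
      (do (require 'kaocha.watch)
          ((resolve 'kaocha.watch/run) options))
      (run-tests options))))
```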

@plexus (Member) commented Nov 7, 2018

Hi James, great to see you trying out Kaocha.

You are right, Kaocha startup is pretty slow at the moment. It's something I was planning to dig into but hadn't gotten to yet. Skipping loading watch is a good idea, as that prevents core.async from being loaded. That seems to already shave a good chunk off.

config is pretty essential, as it also handles building up a config map from the command line arguments, but aero could be lazy-loaded and skipped when no config file is present. I think deferring the loading of deep-diff/puget/fipp would also make a big difference, especially since they are only needed in case of test failures.
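
A minimal sketch of the aero part might look something like this (aero.core/read-config is aero's public entry point; load-config and its path argument are just placeholders here, not Kaocha's actual config code):

```clojure
(ns example.config
  (:require [clojure.java.io :as io]))

;; Sketch: only require aero -- and pay its load time -- when a config
;; file actually exists on disk.
(defn load-config [path]
  (if (.exists (io/file path))
    (do (require 'aero.core)
        ((resolve 'aero.core/read-config) path))
    {})) ; no config file: fall back to an empty map
```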

But at a certain point I think people will also have to accept that Kaocha does a lot more work than Cognitect's runner. You can use Kaocha from the REPL or in --watch mode, and in an actual dev workflow I think either of those is the way to go.

Still, I might look into what other options we have. If there are big dependencies of which we only use a small part, then rolling a stripped-down version into Kaocha could make sense. At some point I'd also like to explore using a persistent JVM with Boot pods for isolation between runs.

@philomates commented Nov 8, 2018

I've noticed a significant difference in startup times when running Kaocha via deps.edn vs lein, with lein being about twice as fast.
@weavejester are you launching via lein or deps.edn?

@weavejester (Author) commented Nov 8, 2018

I'm launching through deps.edn, though I'm surprised that Leiningen would be faster, considering that it theoretically has more to load.

I agree that using --watch or starting it via the REPL is the way to go. When I wrote Eftest, I originally designed it to be run from the REPL, though eventually I added a Leiningen plugin as well. However, sometimes it's handy just to kick off a test run on its own, and in those cases shaving a few seconds off the startup time would be worthwhile.

Of course it's easy to say that, and a lot harder to actually do! I recognize this may be a time sink that requires a lot of effort for relatively little gain. Still, this is a problem that crops up often in Clojure command-line tools, so I wonder if we can generalize a solution. For instance, something like:

```clojure
(defmacro jit [sym]
  `(do (require '~(symbol (namespace sym))) (find-var '~sym)))
```

And then you could write:

```clojure
((jit kaocha.watch/run) config)
```
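
The jit macro defers the require to the call site, so kaocha.watch and everything it pulls in are only loaded the first time that expression is actually evaluated; find-var then returns the var, which can be invoked directly.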

@plexus (Member) commented Nov 8, 2018

I did some quick benchmarking on my machine, a ThinkPad T450s. The numbers below are wall time in seconds, averaged over ten runs.

|                          | lein | clj   |
| ------------------------ | ---- | ----- |
| with kaocha.watch        | 9.28 | 10.85 |
| without kaocha.watch     | 6.04 | 6.66  |
| without watch/fipp/puget | 5.05 | 5.04  |

Deferring the loading of kaocha.watch does make a big difference, so that's going onto master immediately. I'll try to do the same with fipp/puget and then revisit this at some later time.

There is indeed a noticeable difference between clj and lein. I'm gonna guess this is an issue with clj/tools.deps and that it will eventually get fixed. In this talk @arrdem mentions that tools.deps has an issue where it computes shared dependencies multiple times. Maybe that's the issue here.

@plexus (Member) commented Nov 8, 2018

Deferring the loading of fipp/puget/deep-diff and removing io.aviso.ansi shaved off another second. We're now close to twice as fast, and the difference between clj and lein seems to be gone. Hurray :) (see the updated table above).

Thanks @weavejester for prompting me on this. It's good to do some measurements at this point to establish a baseline, and it's great that there were some quick wins.

@plexus (Member) commented Nov 8, 2018

These improvements have been released in 0.0-266.

For reference (mostly for myself), these are the commands I used, in a directory containing only empty src and test directories:

```shell
{for x in {1..10} ; do /usr/bin/time --verbose clj -Sdeps '{:deps {lambdaisland/kaocha {:local/root "/home/arne/github/kaocha"}}}' -m kaocha.runner 2>&1 ; done} | grep 'wall clock' | sed 's/.*://' | ruby -e 'puts STDIN.each_line.map(&:to_f).inject(:+)/10'
{for x in {1..10} ; do /usr/bin/time --verbose lein run -m kaocha.runner 2>&1 ; done} | grep 'wall clock' | sed 's/.*://' | ruby -e 'puts STDIN.each_line.map(&:to_f).inject(:+)/10'
```
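
Each loop runs the runner ten times under /usr/bin/time --verbose, grep keeps the "Elapsed (wall clock) time" line, sed strips everything up to the last colon so only the seconds remain (for runs under a minute), and the Ruby one-liner averages the ten values. Note that the :local/root path points at the author's local checkout of Kaocha, and the {...} grouping around the loop may need adjusting depending on your shell (e.g. ( ... ) in bash).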
@plexus closed this Nov 8, 2018

@weavejester (Author) commented Nov 8, 2018

Excellent! Thanks a lot for taking a look at this.
