Parallelise `cabal build` #976

Open
23Skidoo opened this Issue · 31 comments

5 participants

@23Skidoo
Collaborator

Now that the package-level parallel install has been implemented (see #440), the next logical step is to extend cabal build with support for building multiple modules, components and/or build variants (static/shared/profiling) in parallel. This functionality should also be integrated with cabal install in such a way that we don't over- or under-utilise the available cores.

A prototype implementation of a parallel cabal build is already available as a standalone tool. It works by first extracting a module dependency graph with 'ghc -M' and then running multiple 'ghc -c' processes in parallel.
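
For a sense of the mechanics, here is a minimal sketch of that scheme (the helper names are made up, and scheduling is simplified to "waves" of mutually independent modules; ghc-parmake's actual scheduler is more fine-grained):

import Control.Concurrent.Async (forConcurrently_)
import System.Process (callProcess, readProcess)

-- Ask GHC for a Makefile-style module dependency graph.
dumpDeps :: [FilePath] -> IO String
dumpDeps srcs = readProcess "ghc" ("-M" : "-dep-makefile" : "/dev/stdout" : srcs) ""

-- Compile a single module in single-shot mode.
compileOne :: FilePath -> IO ()
compileOne src = callProcess "ghc" ["-c", src]

-- Compile each wave of independent modules in parallel; the waves are
-- assumed to come from parsing the dependency graph dumped above.
-- (Parallelism here is unbounded; a real tool would cap it at N jobs.)
buildInWaves :: [[FilePath]] -> IO ()
buildInWaves = mapM_ (`forConcurrently_` compileOne)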

Since the parallel install code uses the external setup method exclusively, integrating parallel cabal build with parallel install will require using IPC. A single coordinating cabal install -j N process will spawn a number of setup.exe build --semaphore=/path/to/semaphore children, and each child will be building at most N modules simultaneously. An added benefit of this approach is that nothing special will have to be done to support custom setup scripts.
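
To make the IPC part concrete, here is a hedged sketch of the child's side, assuming the unix package's POSIX semaphore bindings (the --semaphore flag comes from the design above; openBuildSemaphore and withSlot are hypothetical helpers):

import Control.Exception (bracket_)
import System.Posix.Files (stdFileMode)
import System.Posix.Semaphore (OpenSemFlags (..), Semaphore, semOpen, semPost, semWait)

-- Attach to the semaphore created by the coordinating
-- 'cabal install -j N' process (passed via --semaphore=...).
openBuildSemaphore :: String -> IO Semaphore
openBuildSemaphore name =
  semOpen name OpenSemFlags { semCreate = False, semExclusive = False }
          stdFileMode 0

-- Hold one of the N global slots for the duration of a compile job,
-- so that all children together never exceed N simultaneous jobs.
withSlot :: Semaphore -> IO a -> IO a
withSlot sem = bracket_ (semWait sem) (semPost sem)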

An important issue is that compiling with ghc -c is slow compared to ghc --make because the interface files are not cached. One way to fix this is to implement a "build server" mode for GHC. Instead of repeatedly running ghc -c, each build process will spawn at most N persistent ghcs and distribute the modules between them. Evan Laforge has done some work in this direction.

Other issues:

  • Building internal components in parallel requires knowing their dependency graph (this is being implemented as part of integrating cabal repl patches).
  • Generating documentation in parallel may only be safe for build-type: Simple.
@bos
Owner

This will be a huge win if it can make effective use of all cores. I've had quite a few multi-minute builds of individual packages, where the newly added per-package parallelism only helps with dependencies during the very first build, but not at all during ongoing development.

@23Skidoo
Collaborator

@bos The main obstacle here is reloading of interface files, which slows down the parallel compilation considerably compared to ghc --make. See e.g. Neil Mitchell's Shake paper, where he found that "building the same project with ghc --make takes 7.69 seconds, compared to Shake with 11.83 seconds on one processor and 7.41 seconds on four processors." So far, the most promising approach seems to be implementing a "compile server" mode for GHC.

@23Skidoo
Collaborator

An e-mail from @dcoutts that describes the "compile server" idea in more detail:

So here's an idea I've been mulling over recently...

For IDEs and build tools, we want a ghc api interface where we have very
explicit control over the environment in which new modules are compiled.
We want to be in full control, not using --make, and not using any
search paths etc. We know exactly where each .hi and .o file for all
dependent modules are. We should be able to build up an environment of
module name to (interface, object code) by starting from empty, adding
packages and individual module (.hi, .o) files.

Now that'd give us an api a lot like the current command line interface
of ghc -c single shot mode, except that we would be able to specify .hi
files on the command line rather than having ghc find them by searching.

But once we have that api, it'll be useful for IDEs, and useful for a
ghc server. This should give us the performance advantages of ghc --make
but still give us the control and flexibility of single shot mode. I'll
come to parallel builds in a moment.

The way it'd work is you start the server with some initial environment
(e.g. the packages) and you tell it to compile a module, then you can
tell it to extend its environment e.g. with the module you just compiled
and use the extended environment to compile more modules. So clearly you
could do the same thing as ghc --make does but with the dependency
manager being external to ghc.

Now for parallelism. Suppose we have two cores. We launch two ghc server
processes with the same initial package environment. We start compiling
two independent modules. Now we load the .hi files into *both* ghc
server processes to compile more modules. (In practice we don't load
them into each server when they become available, rather we do it on
demand when we see the module we need to compile needs the module
imports in question based on our module dep graph).

So, a short analysis of the number of times that .hi files are loaded:

In the current ghc --make mode, each .hi file is loaded once, so let's
say M loads for M modules. In the current ghc -c mode, we're loading at
most M * M/2 modules (right?), because in a chain of M modules we have
to load all previous .hi files for each ghc -c invocation.

In the hypothetical ghc server mode, with N servers, the worst case is
something like M * N module loads. Also, the N is parallelised. So the
single threaded performance is the same as --make. If you use 8 cores,
the overhead is 8 times higher in total, but distributed across 8 cores
so the wall clock time is no worse.

Actually, it's probably more sensible to look not at the cost of loading
the .hi files for M modules, but for P packages which is likely the
dominant cost. Again, it's P cost for the --make mode, and M * P for the
ghc -c mode, but N * P for the server mode. So this means it might not
be necessary to do the whole-package .hi file optimisation since the
cost is dramatically reduced.
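
(Concretely: with M = 100 modules and N = 8 servers, that is 100
interface loads for --make, on the order of 5,000 for ghc -c, and at
most 800 for the server mode, spread across 8 processes, i.e. about
100 per core.)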

So overall then, there are two parts to the work in ghc: extend the ghc
api to give IDEs and build managers this precise control over the
environment, then extend the main ghc command line interface to use the
new ghc api feature by providing a --server mode. It'd accept inputs on
stdin or something. It only needs very minimal commands: extend the
environment with a .hi .o pair and compile a .hs file. You can assume
that packages and other initial environment things are specified on the
--server command line.
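
A session with such a --server mode might look something like this
(the command names are illustrative only, not a settled protocol):

$ ghc --server -package base -package containers
> extend-env A.hi A.o
> compile B.hs    -- B imports A; its interface is already in the environment
> extend-env B.hi B.o
> compile C.hs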

Finally if there's time, add support for this mode into cabal, but that
might be too much (since that needs a dependency based build manager).

I'll also admit an ulterior motive for this feature, in addition to use
in cabal, which is that I'm working on Visual Studio integration and so
I've been thinking about what IDEs need in terms of the ghc api and I
think very explicit control of the environment is the way to go.
@tibbe
Owner

Even though using ghc -c leads to a slowdown on one core, having it as an option (for people with more cores) in the meantime seems worthwhile to me.

@bos
Owner

@tibbe, I thought the point was that ghc -c doesn't break even until 4 cores. Mind you, Neil was surely testing on Windows, where the OS and filesystem could be reasonably expected to hurt performance quite severely.

@tibbe
Owner

@bos I've heard the number 2 tossed around as well, but we should test and see. Parallelising at the module level should also expose many more opportunities for parallelism; the current parallel build system suffers quite a bit from the lack of them (since there are lots of linear chains of package dependencies).

@nh2

What about profiling builds? Since they build exactly the same things as a normal compilation, I'd guess they could easily be run in parallel with it, and we might save almost 2x in wall-clock time.

@23Skidoo
Collaborator

@nh2 Parallel cabal build will make this possible.

@23Skidoo was assigned
@nh2

I am currently working on this. I got good results with ghc-parmake for compiling large libraries and am now making executables build in parallel.

@23Skidoo
Collaborator

@nh2 Cool! BTW, I proposed this as a GSoC project for this summer. Maybe we can work together if my project gets accepted?

@23Skidoo
Collaborator

@nh2

I got good results with ghc-parmake for compiling large libraries

I'm interested in the details. How large was the speedup? On how many cores? In my testing, the difference was negligible.

@nh2

How large was the speedup? On how many cores?

The project I'm working on has a library with ~400 modules and 40 executables. I'm using an i7-2600K with 4 real (8 virtual) cores. For building the library only, I get:

* cabal build:                                              4:50 mins
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 2":  4:20 mins 
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 4":  3:00 mins 
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 8":  2:45 mins

I had to make minimal changes to ghc-parmake to get this to work, and thus got a 2x speedup almost for free :)

As you can see, the speed-up is not as big as we can probably expect from a parallel ghc --make itself or from your --server: thanks to interface caching, those should be a good bit faster, so I hope your project gets accepted. I'd be glad to help a bit if I can, but while I'm OK with hacking around on cabal, I've never touched GHC.

Building the executables in parallel is independent from all this and will also probably be a small change.

@23Skidoo
Collaborator
* cabal build:                                              4:50 mins
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 2":  4:20 mins 
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 4":  3:00 mins 
* cabal build --with-ghc=ghc-parmake --ghc-options="-j 8":  2:45 mins

Nice to hear that it can give a noticeable speedup on large projects. I should try testing it some more.

Building the executables in parallel is independent from all this and will also probably be a small change.

Maybe, as long as you don't integrate build -j with install -j; then you won't need to implement the IPC design sketched above.

@nh2

@23Skidoo I made a prototype at https://github.com/nh2/cabal/compare/build-executables-in-parallel. It would be nice if you could take a look.

  • I haven't rebased on the latest master yet. Once the other points are sorted out, I'll do that and send a proper pull request (I will probably rewrite my history on that branch as we go towards that).
  • The copying of Semaphore and JobControl from cabal-install is not so nice. Is that the way to go nevertheless, or should they be moved to some Internal package in Cabal? (See the sketch after this list for the shape of the JobControl abstraction.) Update: We are discussing that here.
  • I still have to make sure that pressing Ctrl-C kills everything cleanly, and to get the failure exit codes right.
  • It looks like I can't use macros (I need MIN_VERSION_base) in the Cabal package - is that correct? The way I work around it is very ugly (just using the deprecated old functions in Exception, which creates warnings).
  • We probably want to make parallel jobs a config setting as well, or use the same number as the existing --jobs.
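
For reference, a minimal sketch of the kind of abstraction JobControl provides (simplified and with made-up names; the real module in cabal-install also bounds the number of concurrent jobs):

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Exception (SomeException, try)

-- Results of finished jobs are queued on a channel for collection.
newtype JobControl a = JobControl (Chan (Either SomeException a))

newJobControl :: IO (JobControl a)
newJobControl = JobControl <$> newChan

-- Start a job asynchronously; its outcome (or exception) is queued.
spawnJob :: JobControl a -> IO a -> IO ()
spawnJob (JobControl results) job =
  () <$ forkIO (try job >>= writeChan results)

-- Block until some job finishes and hand back its result.
collectJob :: JobControl a -> IO (Either SomeException a)
collectJob (JobControl results) = readChan results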

Feedback appreciated.

@nh2

I have updated my branch to fix some minor bugs in my code. I can now build my project with cabal build --with-ghc=ghc-parmake --ghc-options="-j 8" -j8 to get both parallel library compilation and parallel executable building.

The questions above still remain.

@23Skidoo
Collaborator

@nh2 Thanks, I'll take a look.

@23Skidoo
Collaborator

@nh2

The copying of Semaphore and JobControl from cabal-install is not so nice. Is that the way to go nevertheless or should they be moved to some Internal package in Cabal?

Can't you just export them from Cabal and remove the copies in cabal-install?

It looks like I can't use macros (need MIN_VERSION_base) in the Cabal package - is that correct?

Yes, this doesn't work because of bootstrapping. You can do this, however:

#if !defined(VERSION_base)
-- we're bootstrapping, so do something that works everywhere
#else
-- VERSION_base is defined, so the MIN_VERSION_base macro is available

#if MIN_VERSION_base(...)
...
#else
...
#endif

#endif

Or maybe we should add a configure script.

@nh2

Yes, this doesn't work because of bootstrapping. You can do this, however

Good idea, but when we do the "something that works everywhere", we will still get the warnings, this time only in one of the two phases.

Or maybe we should add a configure script.

If that would be enough to find out the version of base, that sounds like the better solution. I don't know what the reliable way to find that out is, though.

@23Skidoo
Collaborator

I have another idea - since Cabal only supports building on GHC nowadays, you can use

#if __GLASGOW_HASKELL__ < 700
-- Code that uses block
#else 
-- Code that uses mask
#endif
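
For instance, a hypothetical helper just to show the shape (Cabal's actual call sites will differ):

{-# LANGUAGE CPP #-}
import Control.Exception

-- Run an action with asynchronous exceptions blocked, portably
-- across the old 'block' (GHC < 7) and newer 'mask' APIs.
portableMask_ :: IO a -> IO a
#if __GLASGOW_HASKELL__ < 700
portableMask_ = block
#else
portableMask_ = mask_
#endif
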
@23Skidoo
Collaborator

@nh2

We probably want to make parallel jobs a config setting as well, or use the same number as the existing --jobs.

We can make cabal build read the jobs config file setting, but it shouldn't be used when the package is built during the execution of an install plan (since there's no way to limit the number of parallel build jobs from cabal install ATM).

@nh2

__GLASGOW_HASKELL__

Nice, pushed that.

@nh2

I haven't rebased on the latest master yet

Just rebased that.

@23Skidoo
Collaborator

My GSoC 2013 project proposal has been accepted.

@nh2

Awesome! Let's give this build system another integer factor speedup! :)

@nh2

We can make cabal build read the jobs config file setting, but it shouldn't be used when the package is built during the execution of an install plan (since there's no way to limit the number of parallel build jobs from cabal install ATM).

Do you mean that when we use install -j and build -j together, we get more than n (e.g. n*n) jobs because the two are not coordinated?

@23Skidoo
Collaborator

Do you mean that when we use install -j and build -j together, we get more than n (e.g. n*n) jobs because the two are not coordinated?

Yes. The plan is to use an OS-level semaphore for this, as outlined above.

@tibbe
Owner

That's what I meant, sounds good. We should use this semaphore here. This way we get parallel profiling lib building for free with install -j.

@23Skidoo
Collaborator

@tibbe

That's what I meant, sounds good. We should use this semaphore here. This way we get parallel profiling lib building for free with install -j.

Yes, that's the plan.

@nh2

@23Skidoo I made a prototype at https://github.com/nh2/cabal/compare/build-executables-in-parallel. It would be nice if you could take a look.

I made a pull request for this #1540, rebased on current master. It's much easier to not lose track of things when they are in pull request form.

@23Skidoo Please tell me if you made recent changes that I should make use of there.

@tibbe
Owner

@23Skidoo I believe this is done now, right? Or are you still waiting to submit your PR?

@23Skidoo
Collaborator

I need to rework #1572; @dcoutts doesn't want to merge it in the current state. I hope to get it into 1.20.

@rrnewton referenced this issue in commercialhaskell/stack: Develop/Document multi-level parallelism policy #644 (Closed)
