Initial support for parallel 'ghc --make' #1536

Merged 9 commits into haskell:master from 23Skidoo:ghc-parmake on Oct 31, 2013



23Skidoo commented Oct 8, 2013

Implements the design described in #976 (a single coordinating cabal install process + a semaphore + multiple setup build processes).

One problem with the current implementation is that when multiple setup build processes are started simultaneously, one of them can take all available cores, forcing the remaining ones to wait. We should be able to fix that by guaranteeing that each process will have at least one core at its disposal.

Not implemented: parallel building of profiling/dynamic variants.

Depends on #1520.
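The "at least one core per process" fix described above can be sketched as follows. This is a minimal, hypothetical illustration, not cabal's actual implementation: an `MVar Int` stands in for the cross-process OS semaphore, and the names (`Sem`, `grabExtra`, etc.) are invented for this sketch. The idea is that each setup build process implicitly owns one core and competes, without blocking, only for the surplus cores in the shared pool.

```haskell
import Control.Concurrent.MVar

-- Process-local stand-in for the shared OS semaphore.
newtype Sem = Sem (MVar Int)

newSem :: Int -> IO Sem
newSem n = Sem <$> newMVar n

-- Non-blocking acquire of one slot.
tryAcquire :: Sem -> IO Bool
tryAcquire (Sem m) = modifyMVar m $ \n ->
  if n > 0 then return (n - 1, True) else return (n, False)

release :: Sem -> IO ()
release (Sem m) = modifyMVar_ m (return . (+ 1))

-- "First core is free": a builder keeps one core unconditionally and
-- tries to grab up to 'wanted' extras without ever blocking, so no
-- single build process can starve the others.
grabExtra :: Sem -> Int -> IO Int
grabExtra sem wanted = go 0
  where
    go k | k >= wanted = return k
         | otherwise   = do ok <- tryAcquire sem
                            if ok then go (k + 1) else return k

main :: IO ()
main = do
  -- 2 surplus cores beyond the one implicitly owned by each builder.
  sem   <- newSem 2
  extra <- grabExtra sem 4   -- a builder that could use 4 extra cores
  putStrLn ("building with -j" ++ show (1 + extra))  -- prints -j3
  mapM_ (const (release sem)) [1 .. extra]
```

Because `tryAcquire` never blocks, a build that arrives late simply runs with `-j1` rather than waiting, which is exactly the guarantee the comment above asks for.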

@tibbe tibbe commented on an outdated diff Oct 8, 2013

@@ -0,0 +1,172 @@
+{-# LANGUAGE ForeignFunctionInterface #-}
+{-# OPTIONS_HADDOCK hide #-}
+-- |
+-- Module : Distribution.Compat.IPC
+-- Maintainer : cabal-devel@haskell.org
+-- Portability : portable
+module Distribution.Compat.IPC (Semaphore(..),

tibbe Oct 8, 2013


I don't think exporting the constructor fields is wise as they differ between platforms. Perhaps you could just export semaphoreName if that's all you need outside this module?
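The export style being suggested might look like the sketch below. The module is named `Main` only so the fragment compiles standalone (the real module is `Distribution.Compat.IPC`), and the field names beyond `semaphoreName` are placeholders: exporting the type without `(..)` keeps the platform-specific fields internal.

```haskell
-- In the real code this module would be Distribution.Compat.IPC;
-- 'Main' is used here only so the sketch builds on its own.
module Main (Semaphore, semaphoreName, main) where

-- 'Semaphore' is exported opaquely: callers can name the type and use
-- the accessor, but cannot touch the platform-specific fields.
data Semaphore = Semaphore
  { semaphoreName   :: String  -- the only thing needed outside the module
  , semaphoreHandle :: Int     -- placeholder for the per-platform handle
  }

main :: IO ()
main = putStrLn (semaphoreName (Semaphore "cabal_sem" 0))
```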

@tibbe tibbe commented on an outdated diff Oct 8, 2013

return (Right (BuildOk docsResult testsResult))
pkgid = packageId pkg
buildCommand' = buildCommand defaultProgramConfiguration
- buildFlags _ = emptyBuildFlags {
+ buildFlags ver = emptyBuildFlags {
+ buildSemaphore = if (isJust mSemaphore) && (ver < Version [1,19] [])
+ then error $ "D.C.Install.installUnpackedPackage: "
+ ++ "Cabal version mismatch!"

tibbe Oct 8, 2013


Is this an internal inconsistency error? Could we make it more verbose:

Distribution.Cabal.Install.installUnpackedPackage: internal error: semaphore was given but Cabal version < 1.19


tibbe commented Oct 8, 2013

This is great! LGTM except for the few comments I added.


23Skidoo commented Oct 9, 2013

@tibbe Addressed your comments.


tibbe commented Oct 9, 2013



tibbe commented Oct 11, 2013

Do you need anything more from me in order to merge this?


23Skidoo commented Oct 11, 2013

Well, I wanted @dcoutts to take a look, but he was away at the time.


23Skidoo commented Oct 29, 2013

Log of an IRC discussion with @dcoutts:

21:47 < refold> dcoutts: ok. so how do we proceed?
21:47 < dcoutts> refold: I think it's important to use the semaphore right so we can all stay sane, so here's my suggestion...
21:48 < dcoutts> start with a cabal-install JobLimit impl using the OS-sem
21:48 < dcoutts> so add the unix + win OS-sem impls
21:48 < dcoutts> add a JobLimit based on that
21:48 < dcoutts> using it exclusively within cabal-install initially
21:49 < dcoutts> so that's implicitly using the "first core is free!" thing for the setup build processes we launch
21:49 < dcoutts> then add build -j --semaphore
21:49 < dcoutts> and implement building components in parallel using the semaphore
21:50 < dcoutts> refold: is this one independent of the -j stuff https://github.com/23Skidoo/cabal/commit/4bb6940272b2ea93738e30cd914a899f3c0ca153 ?
21:51 < refold> dcoutts: looks like it.
21:52 < dcoutts> refold: so I think the "get one core min" is a good idea, but I'm not sure yet if we can easily integrate that with a JobControl/JobLimit style abstraction, I'll have to think about that
21:53 < dcoutts> may need to adjust it to use a job pool where there's always one active thread, and other threads get created if there's work and if they can grab the semaphore
21:53 < dcoutts> that will need a bit of thought to get a nice clean abstraction
21:53 < dcoutts> and a robust impl
21:53 < dcoutts> but I think that's worth doing
21:54 < dcoutts> refold: the current JobControl/JobLimit stuff is fairly non-invasive & comprehensible I think
21:54 < refold> dcoutts: yes, I like it too.
21:54 < dcoutts> it's easy to get into a mess with these things
21:55 < dcoutts> refold: if you like I can have a think about how to make a similar thing that lets us use the "get one core min" policy, and use an OS-sem for more cores
21:56 < dcoutts> refold: if you wanted to start now on building components (or better, prof etc) in parallel, you could do worse than to just reuse the existing JobControl from cabal-install
21:56 < dcoutts> it's likely we can make a similar abstraction using an OS-sem later
21:56 < dcoutts> and then port
21:57 < dcoutts> refold: but I think first things would be to try getting the existing cabal-install InstallPlan executor to use the OS-sem rather than the QSem for the JobLimit
21:57 < dcoutts> refold: sound like a plan?
21:58 < dcoutts> and have the last thing we do be to use ghc --make -j
21:58 < refold> dcoutts: ok, I'll do as you suggested: change JobLimit to use OS-sem and implement building multiple components in parallel.
21:58 < dcoutts> refold: ok, and I'll think about building a JobControl/JobLimit-like thing that can implement the "get one core min" policy
21:59 < dcoutts> to use inside setup build
21:59  * dcoutts disappears
21:59 < refold> dcoutts: Great. If you come up with something, you can just leave a comment at that pull request.
22:01 < dcoutts> refold: ah, one detail, you'll need to make OS-sem operations that do the simple case of "grab just 1"
22:01 < dcoutts> refold: but since we know we'll later need "try to grab 1 but don't block" then add those too
22:01 < dcoutts> should be a matter of breaking up the existing withSemaphore

23Skidoo commented Oct 31, 2013

I'm merging in the non-controversial parts (support for ghc --make -j in GhcOptions and build/test/...). I'll do a separate pull request for parallel install with --semaphore.

@23Skidoo 23Skidoo added a commit that referenced this pull request Oct 31, 2013

@23Skidoo 23Skidoo Merge pull request #1536 from 23Skidoo/ghc-parmake
Initial support for parallel 'ghc --make'

@23Skidoo 23Skidoo merged commit 4c7b7f8 into haskell:master Oct 31, 2013

@23Skidoo 23Skidoo deleted the 23Skidoo:ghc-parmake branch Oct 31, 2013
