buildsystem: add multithreaded support #3288

Merged
merged 31 commits into LibreELEC:master from MilhouseVH:le10_mt on Feb 8, 2019

Conversation

@MilhouseVH (Contributor) commented Feb 3, 2019

LibreELEC multi-threaded build system

  1. The default is now multithreaded (THREADCOUNT=100%).

    For legacy sequential builds, execute scripts/image or scripts/create_addon directly, or use the new make targets system-st, release-st, image-st, noobs-st and amlpkg-st.

  2. Add THREADCOUNT=# on the command line (or options) to determine the number of cores used during a multi-threaded build.

  3. THREADCOUNT=100% will use all available cores, THREADCOUNT=200% will use double the number of available cores, and so on; THREADCOUNT=8 will use 8 cores (see the resolution sketch after this list).

  4. THREADCOUNT=0 or THREADCOUNT=1 will perform a "multi-threaded" sequential build, with a single unified log (see #18, below)

  5. THREADCOUNT= (empty or not defined) will perform a "legacy" sequential build.

  6. The ${SCRIPTS}/image_mt script will create the image and will be automatically invoked by ${SCRIPTS}/image when THREADCOUNT is defined. Long-term, ${SCRIPTS}/image_mt will replace ${SCRIPTS}/image.

  7. I ran PROJECT=RPi DEVICE=RPi2 ARCH=arm ${SCRIPTS}/create_addon all, which worked. The only addons that failed were docker and syncthing (due to go:host), and mono (RIP).

  8. I have compared the contents of SYSTEM from a legacy build with a multi-threaded build, and they are more-or-less identical - the only difference is to /usr/bin/libtool (missing the /lib path in multi-threaded builds, which shouldn't be an issue).

  9. I haven't tried to create an img.gz yet, although I don't see why it wouldn't work.

  10. pigz could be a drop-in replacement for gzip when creating images. I have a pigz package.mk if required - adding pigz:host to toolchain adds only a few seconds as it's a tiny package.

  11. Most of the testing has been with the RPi and Generic projects; however, Rockchip TinkerBoard and Amlogic KVIM have also built OK with limited testing.

  12. Multi-threaded builds will use a "plan" (example, generated by /genbuildplan.py) that dictates which packages are to be built, and in approximately what order they should be built.

  13. The "plan" is derived from the usual package.mk dependencies. The plan will build packages in an order that will be different to that used by the legacy build system. The left-to-right order of a dependency within a package.mk is important to the legacy build system, but not the multi-threaded build system - this kind of dependency information should be implemented using a dependency.

  14. There may be scope to tune "plan" generation - for instance, the default package order is fairly "naive". Reordering packages may help reduce process "stalls" caused by dependency contention (or it may not help at all).

  15. When building an image, the plan will be based on the new image virtual package and not hard-coded packages as implemented by /image.

  16. To view the "plan" for a build:

PROJECT=RPi DEVICE=RPi2 ARCH=arm tools/viewplan

The plan is also available in ${BUILD}/.threads/plan during a build.

  17. To monitor progress during a multi-threaded build:
PROJECT=RPi DEVICE=RPi2 ARCH=arm tools/dashboard

Example: [dashboard screenshot]

  18. Per-process (ie. per-package) log files will be created in ${BUILD}/.threads/logs and also output to stdout when THREADCOUNT > 1. When THREADCOUNT=1, a single log will be output only to stdout (just as we do today with the "legacy" build). However, this behaviour can be overridden with ONELOG=yes|no.

    THREADCOUNT=1/ONELOG=no could be useful if you prefer individual log files, while THREADCOUNT>1/ONELOG=yes is unlikely to be useful as the output from multiple processes/packages will be intermixed, making failure diagnosis harder to interpret. (However, a single log file incurs less CPU overhead, so it might shave some time off the build if you're sure you won't experience a failure - and if you do, good luck!)

  19. Build history is written to ${BUILD}/.threads/history - this can be used to determine the order in which packages were built, which might be necessary if a missing dependency is causing a failure, and can also be used for post-build analysis (proof-of-concept).

  20. During a build, the overall progress information will be output to stderr, while log detail will be output to stdout. Redirect stdout to a file in order to view only the progress information.

  21. The latest GNU parallel package is being installed to the toolchain as some distributions ship with unusable versions - for example, Ubuntu 16.04 includes version 20141022, which has a bug that means --halt now,fail=1 is ignored, and --plus is also not supported.
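
As a rough illustration of items 2-5 above (this is not the build system's actual shell code; the function name and edge-case handling are assumptions), a percentage or integer THREADCOUNT could be resolved to a concrete slot count along these lines:

    import os

    def resolve_threadcount(value):
        """Map a THREADCOUNT string to a slot count: 'N%' scales the number
        of available cores, a plain integer is used as-is. The empty/0/1
        special cases described in items 4-5 are handled separately and
        are not covered here."""
        cores = os.cpu_count() or 1
        value = value.strip()
        if value.endswith("%"):
            return max(1, cores * int(value[:-1]) // 100)
        return max(1, int(value))

    # On an 8-core machine:
    #   resolve_threadcount("100%") -> 8
    #   resolve_threadcount("200%") -> 16
    #   resolve_threadcount("8")    -> 8
    # The resulting count is what would be handed to GNU parallel as the
    # number of concurrent build slots (see item 21).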

Potential follow-up PRs/actions

  1. We should find a way to build initramfs via a dependency rather than directly from within linux:target, so that the dependencies defined by initramfs can be scheduled as part of the plan and built appropriately - many of the initramfs dependencies can be built independently of linux:target, which might save some time. This means solving a circular dependency, which is why initramfs is currently built from within linux:target.

  2. Addons will currently be built by generating an individual plan for each add-on. This is not efficient when building many add-ons, particularly when building against an existing toolchain, as many cores will be under-utilised and the build is essentially single-threaded (there is just the add-on to build, with few if any dependencies).

    It is possible to build a single plan for all addons that need to be built, which results in much more efficient/optimal scheduling during the build (see the plan sketch after this list). However, if any package fails then the entire single-plan build will fail, although this could probably be solved eventually.

  3. Stamp checking (during build and unpack) could be optimised, I think (although not 100% sure, just a gut feeling at this point). I don't think every process needs to check and re-check the stamp for a package that has already been unpacked/built by an earlier process. This would save some CPU and IO per process.

  4. Investigate why ccache is of no benefit to Generic builds, and disable ccache for Generic if necessary. Legacy cold/hot Generic builds show identical behaviour (ie. no benefit) so this is not a multithreaded issue. If ccache is of no benefit to Generic builds then it should be disabled to save on the wasted disk space and IO overhead.

    EDIT: Based on testing from @sky42, it appears this is specific to my two build environments, as he does see a ccache benefit with Generic builds. Curiouser and curiouser!

  5. It might be possible to optimise the plan to reduce stalls.
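
To make items 12-13 above and follow-up item 2 more concrete, the sketch below shows how a single plan covering one or more top-level targets could be derived from package dependencies with a topological sort. This is illustrative Python only, not the actual genbuildplan.py logic; the data structures and names are assumptions.

    from collections import deque

    def build_plan(targets, deps):
        """Return a build order covering `targets` and everything they
        depend on. `deps` maps a package name to a list of its dependency
        package names, e.g. {"image": ["linux", "busybox"], ...}."""
        # Collect the subgraph reachable from the requested targets.
        needed, stack = set(), list(targets)
        while stack:
            pkg = stack.pop()
            if pkg not in needed:
                needed.add(pkg)
                stack.extend(deps.get(pkg, []))

        # Kahn's algorithm: a package becomes buildable once all of its
        # dependencies have been planned.
        pending = {p: set(deps.get(p, [])) & needed for p in needed}
        ready = deque(sorted(p for p, d in pending.items() if not d))
        plan = []
        while ready:
            pkg = ready.popleft()
            plan.append(pkg)
            for other, waiting in pending.items():
                if pkg in waiting:
                    waiting.remove(pkg)
                    if not waiting:
                        ready.append(other)
        if len(plan) != len(needed):
            raise RuntimeError("circular dependency detected")
        return plan

Calling build_plan(["image"], deps), or build_plan(list_of_addons, deps) for a single-plan addon build, yields an order in which every package appears after its dependencies; a real planner would additionally let independent packages run concurrently in separate slots.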

Testing results

  1. Results below are from multiple RPi2/Generic/KVIM/TinkerBoard runs on 2 different servers:

    • 8-core Intel Haswell 4.0GHz Virtual Machine (Ubuntu 16.04)
    • 8-core AMD FX-8350 4.0GHz dedicated hardware (Ubuntu 17.10)
  2. The VM times are not entirely "stable", suggesting the VM host may have been active with other VMs (or other users active on the same VM) during the tests.

  3. The timings from the dedicated build server are more stable as the dedicated server was almost always left idle during tests.

  4. I have tested builds with different ccache states. "cold" is when the ccache does not exist at the start of the build, while "hot" means the ccache already exists (from a previous build) but all packages, stamps etc. have been removed from the build directory. Maybe this is useful for some situations, but I would not recommend using a "hot" ccache for an official release.

  5. As an experiment I also tested with "32 cores" (even though only 8 cores existed), which generally produced improved results compared with 8 cores, probably due to more efficient/opportunistic scheduling.

  6. I have tried to "optimise" the plan by grouping packages with similar dependencies etc., but without any real success (not noticeably better than the naive order), so the current default is to generate plans with --no-reorder and use the naive order.

  7. I have compared building all addons (all -docker -syncthing -mono) for RPi2 (173 addons, list) using the legacy and multi-threaded approaches (8 cores), both with individual plans ("multi-plan") and a single plan. Timings are for the build only, and do not include post-build addon() processing (as not relevant - it should be the same for all three). The same "kodi" toolchain is used as the basis for all three tests (ie. build image, then build/time the addons).

    The "multi-plan" approach is rather inefficient for two reasons:

    • All 870 packages are sourced to generate the plan for each addon - this means sourcing 870 packages 173 times (for RPi2). This could be alleviated by caching the JSON package data so that the packages are sourced only once (see the caching sketch below).
    • Many packages will have been built by/for an earlier add-on, often reducing the plan for an add-on to a single thread/process, which is to say that per-addon plans often have limited parallelism.

    An example script that builds all addons using a single plan can be seen here: pastebin

    The integrity of the created addons from multi-threaded builds has not been analysed, so more work may be required in this area depending on which avenue people want to pursue.
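
As a rough sketch of the caching idea mentioned in the first bullet above (the helper script name and cache path are hypothetical; the real build system sources package.mk files in shell), the parsed package metadata could be dumped to JSON once and reused by every subsequent per-addon plan:

    import json, os, subprocess

    CACHE = ".threads/package-data.json"   # hypothetical cache location

    def load_package_data(build_dir):
        """Source all package.mk files once, then reuse the cached JSON."""
        cache = os.path.join(build_dir, CACHE)
        if os.path.exists(cache):
            with open(cache) as f:
                return json.load(f)        # cheap reload for later addon plans
        # Expensive path: run a (hypothetical) helper that sources every
        # package.mk and emits one JSON document describing all packages.
        raw = subprocess.check_output(["./scripts/dump_package_json"], cwd=build_dir)
        data = json.loads(raw)
        with open(cache, "w") as f:
            json.dump(data, f)
        return data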


Intel Virtual Machine

RPi2, Generic results

Runs #1-#4 are with different optimisation/reordering strategies.

Runs #5-#6 are with --no-reorder, as re-ordering seems ineffective. This is now the default.

KVIM, TinkerBoard results

All runs with --no-reorder

1. RPi2, cold, legacy (sequential):
run# 1 2 3 4 5 6 AVG
real 84m57 87m35 85m12 86m13 n/a 84m29 85m41
user 223m20 223m04 223m58 224m03 n/a 224m38 223m48
sys 27m14 26m53 27m18 27m28 n/a 27m12 27m13
2. RPi2, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 4 5 6 AVG
real 87m59 88m00 88m22 87m27 86m27 87m33 87m38
user 223m41 223m04 224m29 224m19 223m43 224m35 223m58
sys 27m22 26m55 27m18 27m16 27m24 27m27 27m17
3. RPi2, cold, THREADCOUNT=8:
run# 1 2 3 4 5 6 AVG
real 57m35 47m18 57m08 54m24 48m57 50m01 52m33
user 225m54 226m00 230m12 227m25 227m15 228m27 227m32
sys 27m21 26m54 27m04 27m14 27m10 27m20 27m10
4. RPi2, hot, THREADCOUNT=8:
run# 1 2 3 4 5 6 AVG
real 21m35 18m57 22m04 19m55 19m04 19m11 20m07
user 56m40 55m50 56m08 57m19 54m45 55m07 55m58
sys 9m15 8m55 8m49 9m29 8m37 8m51 8m59
5. RPi2, cold, THREADCOUNT=32:
run# 1 2 3 4 5 6 AVG
real 46m11 45m29 51m47 45m38 45m10 46m35 46m48
user 228m53 229m28 232m45 231m02 230m53 234m44 231m17
sys 27m10 27m03 27m10 27m20 27m12 27m49 27m17
6. RPi2, hot, THREADCOUNT=32:
run# 1 2 3 4 5 6 AVG
real 24m01 23m07 28m44 18m09 17m51 17m59 21m38
user 94m30 91m25 89m25 55m56 55m24 57m02 73m57
sys 15m02 14m31 14m02 8m48 8m51 8m58 11m42
7. Generic, cold, legacy (sequential):
run# 1 2 3 4 5 6 AVG
real 127m17 129m56 129m30 133m23 128m13 136m34 130m48
user 425m50 426m52 428m13 431m12 429m10 432m17 428m55
sys 44m03 43m30 44m51 45m09 44m39 44m52 44m30
8. Generic, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 4 5 6 AVG
real 136m27 138m19 130m51 147m53 129m27 140m18 137m12
user 426m29 430m19 428m25 435m06 427m10 429m57 429m34
sys 44m19 44m08 44m01 44m34 44m01 43m53 44m09
9. Generic, cold, THREADCOUNT=8:
run# 1 2 3 4 5 6 AVG
real 84m53 87m35 79m47 98m26 85m19 83m34 86m35
user 432m30 438m02 437m24 446m09 433m00 432m34 436m36
sys 44m02 44m24 44m24 44m47 43m57 44m14 44m18
10. Generic, hot, THREADCOUNT=8:
run# 1 2 3 4 5 6 AVG
real 107m52 80m32 79m35 81m19 84m21 97m14 88m28
user 431m55 437m50 437m09 437m56 432m02 436m37 435m34
sys 44m13 44m50 44m28 44m42 44m21 44m22 44m29
11. Generic, cold, THREADCOUNT=32:
run# 1 2 3 4 5 6 AVG
real 74m08 83m30 74m58 74m56 73m35 87m04 78m01
user 441m36 452m44 446m26 445m10 444m59 449m50 446m47
sys 44m11 45m09 44m28 44m18 44m20 44m27 44m28
12. Generic, hot, THREADCOUNT=32:
run# 1 2 3 4 5 6 AVG
real 74m56 74m37 59m39 75m05 74m52 103m29 77m06
user 442m28 446m27 335m28 445m03 445m52 458m06 428m54
sys 44m34 44m52 35m38 44m34 44m40 45m22 43m16
13. KVIM, cold, legacy (sequential):
run# 1 2 3 AVG
real 81m10 99m53 80m18 87m07
user 200m32 261m04 199m30 220m22
sys 24m48 41m07 24m43 30m12
14. KVIM, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 AVG
real 83m57 111m07 82m23 92m29
user 200m43 279m55 199m59 226m52
sys 24m42 50m15 24m51 33m16
15. KVIM, cold, THREADCOUNT=8:
run# 1 2 3 AVG
real 46m17 45m19 45m21 45m39
user 203m31 203m46 203m14 203m30
sys 24m50 24m47 24m51 24m49
16. KVIM, hot, THREADCOUNT=8:
run# 1 2 3 AVG
real 23m54 23m28 24m15 23m52
user 76m15 77m30 81m21 78m22
sys 11m08 11m18 11m56 11m27
17. KVIM, cold, THREADCOUNT=32:
run# 1 2 3 AVG
real 40m45 40m47 40m39 40m43
user 205m50 206m48 205m40 206m06
sys 24m37 24m38 24m46 24m40
18. KVIM, hot, THREADCOUNT=32:
run# 1 2 3 AVG
real 21m12 21m14 21m09 21m11
user 78m04 78m35 78m44 78m27
sys 11m06 11m10 11m15 11m10
19. TinkerBoard, cold, legacy (sequential):
run# 1 2 3 AVG
real 89m57 84m14 85m02 86m24
user 219m28 217m04 219m36 218m42
sys 27m20 26m24 27m27 27m03
20. TinkerBoard, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 AVG
real 87m06 88m03 86m53 87m20
user 218m50 217m29 219m51 218m43
sys 26m51 26m34 27m09 26m51
21. TinkerBoard, cold, THREADCOUNT=8:
run# 1 2 3 AVG
real 52m26 50m30 50m01 50m59
user 221m41 220m44 222m43 221m42
sys 26m47 26m37 27m11 26m51
22. TinkerBoard, hot, THREADCOUNT=8:
run# 1 2 3 AVG
real 34m56 36m17 34m25 35m12
user 137m07 140m35 140m22 139m21
sys 18m13 18m49 18m51 18m37
23. TinkerBoard, cold, THREADCOUNT=32:
run# 1 2 3 AVG
real 44m00 43m51 44m41 44m10
user 224m11 224m21 225m43 224m45
sys 26m39 26m46 27m04 26m49
24. TinkerBoard, hot, THREADCOUNT=32:
run# 1 2 3 AVG
real 31m37 30m40 32m05 31m27
user 146m15 140m02 149m32 145m16
sys 18m59 18m16 19m36 18m57

AMD dedicated server

RPi2, Generic results

Runs #1-#2 are with different optimisation/reordering strategies.

Runs #3-#4 are with --no-reorder, as re-ordering seems ineffective. This is now the default.

KVIM, TinkerBoard results

All runs with --no-reorder

25. RPi2, cold, legacy (sequential):
run# 1 2 3 4 AVG
real 120m39 116m20 116m19 116m04 117m20
user 382m51 382m43 382m35 382m26 382m38
sys 50m02 49m59 50m02 50m05 50m02
26. RPi2, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 4 AVG
real 119m37 119m50 119m10 118m13 119m12
user 384m38 384m44 383m52 383m14 384m07
sys 50m27 51m04 51m00 50m09 50m40
27. RPi2, cold, THREADCOUNT=8:
run# 1 2 3 4 AVG
real 71m43 72m10 71m54 73m20 72m16
user 398m14 397m51 397m42 396m47 397m38
sys 51m58 52m43 52m05 51m54 52m10
28. RPi2, hot, THREADCOUNT=8:
run# 1 2 3 4 AVG
real 27m53 27m44 28m00 27m35 27m48
user 115m38 111m43 111m49 112m07 112m49
sys 22m38 22m18 22m49 22m12 22m29
29. RPi2, cold, THREADCOUNT=32:
run# 1 2 3 4 AVG
real 69m25 69m24 69m06 68m50 69m11
user 404m36 404m41 404m22 404m17 404m29
sys 51m30 52m05 51m37 53m07 52m04
30. RPi2, hot, THREADCOUNT=32:
run# 1 2 3 4 AVG
real 35m29 36m22 26m54 26m17 31m15
user 181m58 183m55 118m44 112m54 149m22
sys 33m13 33m52 23m02 23m00 28m16
31. Generic, cold, legacy (sequential):
run# 1 2 3 4 AVG
real 179m05 174m49 174m22 174m08 175m36
user 703m07 702m47 702m48 702m39 702m50
sys 74m40 74m52 75m00 74m46 74m49
32. Generic, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 4 AVG
real 178m38 178m37 177m21 176m49 177m51
user 705m52 705m54 704m44 703m29 704m59
sys 74m53 74m57 75m03 74m50 74m55
33. Generic, cold, THREADCOUNT=8:
run# 1 2 3 4 AVG
real 117m14 117m33 117m25 122m23 118m38
user 731m21 731m11 731m01 722m19 728m58
sys 75m11 75m18 75m24 75m51 75m26
34. Generic, hot, THREADCOUNT=8:
run# 1 2 3 4 AVG
real 117m27 117m27 117m43 122m21 118m44
user 731m10 731m09 730m48 721m54 728m45
sys 75m32 75m23 76m19 75m55 75m47
35. Generic, cold, THREADCOUNT=32:
run# 1 2 3 4 AVG
real 113m52 113m23 113m23 113m53 113m37
user 743m52 744m07 742m20 738m48 742m16
sys 74m33 74m41 74m43 75m07 74m46
36. Generic, hot, THREADCOUNT=32:
run# 1 2 3 4 AVG
real 113m47 113m43 113m27 113m44 113m40
user 743m52 743m34 742m15 738m50 742m07
sys 75m12 74m52 74m46 75m01 74m57
37. KVIM, cold, legacy (sequential):
run# 1 2 3 AVG
real 110m31 110m31 110m29 110m30
user 342m50 342m29 342m51 342m43
sys 46m02 46m12 45m59 46m04
38. KVIM, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 AVG
real 113m04 112m49 112m46 112m53
user 343m39 343m37 343m32 343m36
sys 46m02 46m19 46m15 46m12
39. KVIM, cold, THREADCOUNT=8:
run# 1 2 3 AVG
real 67m53 67m57 67m25 67m45
user 357m01 356m33 356m55 356m49
sys 47m43 47m47 47m46 47m45
40. KVIM, hot, THREADCOUNT=8:
run# 1 2 3 AVG
real 34m08 35m10 32m46 34m01
user 153m53 154m36 146m10 151m33
sys 26m13 26m32 25m13 25m59
41. KVIM, cold, THREADCOUNT=32:
run# 1 2 3 AVG
real 63m32 63m39 64m09 63m46
user 364m06 363m50 364m03 363m59
sys 47m31 47m43 48m10 47m48
42. KVIM, hot, THREADCOUNT=32:
run# 1 2 3 AVG
real 31m35 32m52 31m42 32m03
user 151m25 152m46 153m33 152m34
sys 25m21 26m11 25m39 25m43
43. TinkerBoard, cold, legacy (sequential):
run# 1 2 3 AVG
real 117m19 116m23 116m22 116m41
user 374m45 374m31 374m41 374m39
sys 50m24 50m24 50m07 50m18
44. TinkerBoard, cold, THREADCOUNT=1 (sequential):
run# 1 2 3 AVG
real 118m29 118m27 118m32 118m29
user 375m28 375m24 375m27 375m26
sys 50m22 50m29 50m20 50m23
45. TinkerBoard, cold, THREADCOUNT=8:
run# 1 2 3 AVG
real 73m31 73m52 73m30 73m37
user 389m22 389m22 389m32 389m25
sys 51m36 51m36 51m35 51m35
46. TinkerBoard, hot, THREADCOUNT=8:
run# 1 2 3 AVG
real 52m48 57m47 53m50 54m48
user 268m26 301m00 279m17 282m54
sys 40m03 43m58 41m32 41m51
47. TinkerBoard, cold, THREADCOUNT=32:
run# 1 2 3 AVG
real 68m28 68m19 68m14 68m20
user 397m22 397m09 397m26 397m19
sys 51m38 51m50 51m43 51m43
48. TinkerBoard, hot, THREADCOUNT=32:
run# 1 2 3 AVG
real 54m04 51m30 54m56 53m30
user 302m51 290m28 312m18 301m52
sys 43m22 42m10 43m56 43m09

Intel Virtual Machine

LibreELEC S905 add-on results

Add-ons selected for build: all -docker -syncthing -mono.

All multi-threaded runs with 8 cores.

Plan is with --no-reorder, building 578 packages, after an image build (existing toolchain etc.).

49. LibreELEC S905, hot, addons (all -docker -syncthing -mono):
run legacy multi-plan single-plan
real 85m55 84m38 46m04
user 285m33 295m36 301m18
sys 26m47 29m10 28m40
@MilhouseVH (Contributor, Author) commented Feb 3, 2019

More testing results from @sky42src:

i7-8086K @ 4.7 GHz, AVX on all cores, on Ubuntu 16.04.5 Server with Git 720a3a4 + MT patch 20190131

Generic.x86_64 threads=18
  CCACHE=off    CPU_990%    Time_44:20.79
  CCACHE=cold   CPU_1000%   Time_49:55.76
  CCACHE=hot    CPU_784%    Time_19:01.50
Generic.x86_64 legacy
  CCACHE=off    CPU_573%    Time_1:11:43
  CCACHE=cold   CPU_593%    Time_1:19:06
  CCACHE=hot    CPU_315%    Time_38:32.14

KVIM.arm threads=18
  CCACHE=off    CPU_862%    Time_25:19.52
  CCACHE=cold   CPU_868%    Time_28:11.74
  CCACHE=hot    CPU_723%    Time_13:45.59
KVIM.arm legacy
  CCACHE=off    CPU_434%    Time_46:23.33
  CCACHE=cold   CPU_445%    Time_50:48.60
  CCACHE=hot    CPU_331%    Time_26:59.04

LePotato.arm threads=18
  CCACHE=off    CPU_871%    Time_25:09.91
  CCACHE=cold   CPU_881%    Time_27:48.17
  CCACHE=hot    CPU_731%    Time_13:33.03
LePotato.arm legacy
  CCACHE=off    CPU_434%    Time_46:24.27
  CCACHE=cold   CPU_446%    Time_50:48.55
  CCACHE=hot    CPU_330%    Time_26:58.74

Odroid_C2.arm threads=18
  CCACHE=off    CPU_871%    Time_25:09.69
  CCACHE=cold   CPU_880%    Time_27:53.65
  CCACHE=hot    CPU_724%    Time_13:43.33
Odroid_C2.arm legacy
  CCACHE=off    CPU_434%    Time_46:20.04
  CCACHE=cold   CPU_447%    Time_50:35.75
  CCACHE=hot    CPU_331%    Time_26:53.66

RK3328.arm threads=18
  CCACHE=off    CPU_885%    Time_26:58.07
  CCACHE=cold   CPU_911%    Time_29:48.38
  CCACHE=hot    CPU_768%    Time_15:41.25
RK3328.arm legacy
  CCACHE=off    CPU_454%    Time_48:35.85
  CCACHE=cold   CPU_471%    Time_53:22.85
  CCACHE=hot    CPU_378%    Time_29:10.96

RK3399.arm threads=18
  CCACHE=off    CPU_887%    Time_27:36.70
  CCACHE=cold   CPU_907%    Time_30:41.78
  CCACHE=hot    CPU_783%    Time_16:09.87
RK3399.arm legacy
  CCACHE=off    CPU_459%    Time_49:16.36
  CCACHE=cold   CPU_476%    Time_54:14.17
  CCACHE=hot    CPU_392%    Time_29:42.89

RPi2.arm threads=18
  CCACHE=off    CPU_895%    Time_26:43.22
  CCACHE=cold   CPU_911%    Time_30:06.42
  CCACHE=hot    CPU_620%    Time_11:40.51
RPi2.arm legacy
  CCACHE=off    CPU_461%    Time_48:04.74
  CCACHE=cold   CPU_478%    Time_53:15.30
  CCACHE=hot    CPU_264%    Time_24:50.74

RPi.arm threads=18
  CCACHE=off    CPU_891%    Time_26:51.86
  CCACHE=cold   CPU_911%    Time_30:01.53
  CCACHE=hot    CPU_620%    Time_11:43.45
RPi.arm legacy
  CCACHE=off    CPU_461%    Time_48:07.47
  CCACHE=cold   CPU_479%    Time_53:08.83
  CCACHE=hot    CPU_265%    Time_24:52.69

S905.arm threads=18
  CCACHE=off    CPU_842%    Time_26:50.14
  CCACHE=cold   CPU_852%    Time_29:55.63
  CCACHE=hot    CPU_709%    Time_15:14.30
S905.arm legacy
  CCACHE=off    CPU_442%    Time_47:10.68
  CCACHE=cold   CPU_458%    Time_51:45.30
  CCACHE=hot    CPU_354%    Time_28:03.52

TinkerBoard.arm threads=18
  CCACHE=off    CPU_880%    Time_27:01.39
  CCACHE=cold   CPU_902%    Time_29:53.13
  CCACHE=hot    CPU_767%    Time_15:06.22
TinkerBoard.arm legacy
  CCACHE=off    CPU_450%    Time_48:42.93
  CCACHE=cold   CPU_467%    Time_53:21.69
  CCACHE=hot    CPU_370%    Time_28:56.36

WeTek_Play_2.arm threads=18
  CCACHE=off    CPU_858%    Time_25:25.42
  CCACHE=cold   CPU_866%    Time_28:13.33
  CCACHE=hot    CPU_730%    Time_13:38.02
WeTek_Play_2.arm legacy
  CCACHE=off    CPU_434%    Time_46:19.36
  CCACHE=cold   CPU_447%    Time_50:38.70
  CCACHE=hot    CPU_332%    Time_26:53.09
@MilhouseVH changed the title from "buildsystem: add multithreaded option" to "buildsystem: add multithreaded support" Feb 3, 2019
@lrusak (Member) commented Feb 3, 2019

Wooooohooooo! Way to go @MilhouseVH! I knew you could do it 😉

@MilhouseVH force-pushed the MilhouseVH:le10_mt branch from ac2ab9f to e0b7c36 Feb 3, 2019
@arthur-liberman (Contributor) commented Feb 3, 2019

Congrats! Very nice to see LibreELEC finally implement a multi-threaded build system.
But it would have been nice to see a mention of where you got the idea from ;)

@CvH added the LE 9.2 label Feb 3, 2019
@MilhouseVH force-pushed the MilhouseVH:le10_mt branch 11 times, most recently from 4830e5a to 3e89a2f Feb 3, 2019
@MilhouseVH (Contributor, Author) commented Feb 6, 2019

Updated:

  1. Multi-threaded builds are now the default, with a default of THREADCOUNT=100%.

    Additional single-thread Makefile targets have been added to permit legacy builds if required (the single-thread procedure will ultimately be removed).

  2. Added scripts/create_addon_mt which will perform a single-plan addon build. The multi-threaded build will now continue building packages regardless of failure, and report failed add-ons at the end (with log details if a per-package log is available).

  3. tools/mtstats.py now includes a breakdown of slot concurrency.
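
The format of ${BUILD}/.threads/history isn't documented in this PR, so the following is only a sketch of the kind of slot-concurrency breakdown mtstats.py could report, assuming per-package start/end timestamps can be extracted from the history:

    from collections import Counter

    def concurrency_breakdown(events):
        """events: iterable of (start, end) times, one pair per package
        built. Returns the time spent at each concurrency level (one
        package building, two packages building, ...), computed by
        sweeping the start/end points in time order."""
        points = []
        for start, end in events:
            points.append((start, +1))
            points.append((end, -1))
        points.sort()

        busy = Counter()
        active, last_t = 0, None
        for t, delta in points:
            if last_t is not None and active > 0:
                busy[active] += t - last_t   # time with `active` packages building
            active += delta
            last_t = t
        return busy

    # concurrency_breakdown([(0, 10), (2, 8), (5, 20)])
    #   -> Counter({1: 12, 2: 5, 3: 3})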

Pending review of scripts/create_addon_mt I'd like to propose that we rename as follows:

      scripts/create_addon -> scripts/create_addon_st
      scripts/create_addon_mt -> scripts/create_addon

      scripts/image -> scripts/image_st
      scripts/image_mt -> scripts/image

and then at some point in the not too distant future we will drop the *-st variants.

@MilhouseVH force-pushed the MilhouseVH:le10_mt branch 4 times, most recently from 84fd462 to dd5861a Feb 6, 2019
@MilhouseVH (Contributor, Author) commented Feb 7, 2019

Pushed hopefully the final commit which renames the legacy scripts to scripts/image_st and scripts/create_addon_st. The default build is now fully multithreaded (THREADCOUNT=100%).

@MilhouseVH force-pushed the MilhouseVH:le10_mt branch 3 times, most recently from f2d86c0 to 0062065 Feb 7, 2019
MilhouseVH added 25 commits Feb 8, 2019
…all race condition

Both packages update the same fonts.dir, but font-xfree86-type1 trashes it so build this first.
If the source package changes then we need to rebuild too.
…ucible build)

"libX11 and xrandr to read the sink's EDID, used to determine the PC's HDMI physical address"
When building lirc after alsa-utils, the following unwanted alsa libraries are built by lirc:

NEW FILE       Delta: 10,536       devel-20190115185543-5767941: 10,536        devel-20190115133317-5767941: n/a          /usr/lib/lirc/plugins/alsa_usb.so
NEW FILE       Delta: 19,176       devel-20190115185543-5767941: 19,176        devel-20190115133317-5767941: n/a          /usr/lib/lirc/plugins/audio_alsa.so
Avoids trashing $TOOLCHAIN/lib/python2.7/site-packages/easy-install.pth
when installing Python host packages (distutilscross:host, setuptools:host,
MarkupSafe:host etc.).
emby and emby4 both unzip into ${BUILD}/system, which is fun when
both add-ons are being unpacked concurrently.
@MilhouseVH force-pushed the MilhouseVH:le10_mt branch from 0062065 to 98c0210 Feb 8, 2019
@jernejsk merged commit e56a92b into LibreELEC:master Feb 8, 2019
@Ray-future (Contributor) commented on scripts/image_mt in 0ebc6fe Feb 12, 2019

I'm going to ask here because it's more appropriate. The license was changed from GPL-2.0-or-later to a more restrictive one, GPL-2.0.
According to the diff of image and image_st, these files are almost identical, which suggests image_mt (now image) is based on the GPL-2.0-or-later image (now image_st):
https://paste.ubuntu.com/p/VYH47gSFBg/

Contributor (Author) replied Feb 12, 2019

image_st is the original file with the original licence unchanged. image_mt (now image) is a new incarnation with significant changes published with a new GPLv2-only licence.

Contributor replied Feb 12, 2019

You can't sell me significant changes. This is BS. Look at the diff.

Contributor replied Feb 12, 2019

Spoiler alert: 90% of the diff is just changes to comments where the first letter is not capitalized.

Contributor (Author) replied Feb 12, 2019

I have restored the original licence and copyright to end this discussion. Now please do likewise and respect the GPLv2-only licence of the new code which cannot be used in a GPLv3 project.

Contributor replied Feb 12, 2019

Yes, we will investigate the code further since you ripped off the idea from @arthur-liberman - who knows what was copied there ;). We are also fine to just drop your changes and use Art's, as it builds faster.

Contributor replied Feb 12, 2019

Ahh and FYI. We changed to GPLv3 because we use fakeroot which is GPLv3. Therefore the whole project needs to be GPLv3. I've seen that you use that too:
https://github.com/LibreELEC/LibreELEC.tv/blob/master/packages/devel/fakeroot/package.mk#L11

Contributor (Author) replied Feb 12, 2019

who knows what was copied there

Spoiler alert: nothing, absolutely nothing, as it's a completely different approach which would be clear if you read the code. Believe it or not, I've not even studied your solution in any detail whatsoever, I only read enough to know it wasn't suitable for LibreELEC.

We are also fine to just drop your changes and use Art's as it builds faster.

Great, and good luck with that.

Contributor (Author) replied Feb 12, 2019

Ahh and FYI. We changed to GPLv3 because we use fakeroot which is GPLv3. Therefore the whole project needs to be GPLv3. I've seen that you use that too:

Fakeroot is a separate program. You can mix GPLv3 and GPLv2 to build an operating system (the Linux kernel is GPLv2), but you can't mix GPLv2 and GPLv3 within the same program, which is what the build system is.

https://www.gnu.org/licenses/rms-why-gplv3.en.html (third & fourth paragraphs).

Contributor replied Feb 12, 2019

Lol when you go like that about it I wouldn't even say the Buildsystem is "one" program. Anyway I don't wanna go into endless discussions with you.

Have a nice day :)
