I've been spending some time looking at what the slow bits are of a clean 1lab build:
| Name       | Count | Total  | Average | Max   |
|------------|-------|--------|---------|-------|
| node       | 1,931 | 6m18s  | 0.20s   | 0.53s |
| pdflatex   | 258   | 4m16s  | 0.99s   | 1.80s |
| agda       | 1     | 4m04s  | 4m04s   | 4m04s |
| agda html  | 227   | 48.81s | 0.22s   | 1.12s |
| git        | 383   | 12.58s | 0.03s   | 0.13s |
| pdftocairo | 258   | 9.54s  | 0.04s   | 0.22s |
While there are some things that are always going to be slow (Agda), the obvious issue here is KaTeX: during a fresh build, we're spinning up 1.9k node processes to run KaTeX, each of which lives for a fraction of a second!
It would be worth experimenting with creating some sort of build daemon here; just a basic JS program which reads an equation from stdin and writes the resulting SVG to stdout. Our Shake code can then maintain a pool of these, avoiding the cost of spawning a process for each job.
SquidDev added the `web` label (for issues/pull requests relating to the 1lab website itself) on Jul 28, 2022.
One other thing @plt-amy suggested is kicking off the KaTeX and diagram builds earlier. We can create a separate job which parses the Markdown and compiles the equations/diagrams, and start it at the same time as the main Agda build.
It won't reduce single-core time, but it means we can benefit from parallelism earlier on, so we don't end up with a build graph where all the KaTeX work is serialised after Agda finishes.