Millions of calls to: "mmdc -i xx.md -o xx.pdf" #4806
Currently on Windows, I need to convert millions of "mermaid" files to PDF, and it is a slow process. Is there an alternative?
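For context, the workflow being described is presumably something like the loop below: one `mmdc` invocation per file, each of which starts a fresh Node process and headless browser. This is a hedged sketch (the file names and the Node-based loop are illustrative, not from the original question):

```js
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

// hypothetical input list — in practice this would be the millions of files
const inputFiles = ['diagram-0001.md', 'diagram-0002.md'];

for (const inputFile of inputFiles) {
  // every call starts a new Node process and a new headless browser,
  // which is what makes this approach slow at scale
  // (on Windows, the npm shim may require 'mmdc.cmd' or { shell: true })
  await run('mmdc', ['-i', inputFile, '-o', inputFile.replace(/\.md$/, '.pdf')]);
}
```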
Answered by aloisklink, Sep 19, 2023
The slow part about `mmdc` is that it launches a new browser instance for every file. I'd recommend writing your own script in Node.JS that launches the browser only once and reuses it, using the `renderMermaid` function exported by `@mermaid-js/mermaid-cli`:

```js
import { readFile, writeFile } from 'node:fs/promises';
import { renderMermaid } from "@mermaid-js/mermaid-cli";
import puppeteer from 'puppeteer';
import pLimit from 'p-limit';

// depending on how much RAM/CPU you have, you can increase this number to get faster performance
const limit = pLimit(4);
const inputFiles = []; // your input files (can use https://www.npmjs.com/package/glob to find these — see the sketch after this block)

const browser = await puppeteer.launch();
try {
  await Promise.all(inputFiles.map((inputFile) => limit(async () => {
    console.log(`Working on file ${inputFile}`);
    const definition = await readFile(inputFile, { encoding: "utf8" });
    const { data } = await renderMermaid(browser, definition, "pdf", {});
    const outputFile = inputFile + ".pdf";
    console.log(`Writing file ${inputFile} to ${outputFile}`);
    await writeFile(outputFile, data);
  })));
} finally {
  await browser.close();
}
```
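To fill in the `inputFiles` array, the glob package mentioned in the comment above can be used. A minimal sketch, assuming a reasonably recent glob version with a promise-based API; the `**/*.md` pattern is just an example:

```js
import { glob } from 'glob';

// Collect every Markdown/Mermaid file below the current directory.
// Adjust the pattern to match how your diagram files are actually named.
const inputFiles = await glob('**/*.md');
console.log(`Found ${inputFiles.length} diagrams to render`);
```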
No worries :) I have a package called remark-mermaid-dataurl that converts thousands of mermaid diagrams inside markdown files into SVGs, so I basically did the same thing! By the way, please feel free to press the Mark as answer button on my comment once you confirm that it's working!
It might make sense to officially add this feature to the @mermaid-js/mermaid-cli project! But considering how hard it would be to handle the list of input files/output files/number of cores/jobs, maybe it's easier to just leave it for users to adapt the script above.
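If someone does adapt the script above into a small command-line tool, the file list and job count mentioned here could come from arguments. A minimal sketch using Node's built-in parseArgs; the `--jobs` flag and the `render-all.mjs` name are invented for illustration, not existing mmdc options:

```js
import { parseArgs } from 'node:util';
import pLimit from 'p-limit';

// usage (hypothetical): node render-all.mjs --jobs 8 a.md b.md c.md
const { values, positionals } = parseArgs({
  options: {
    // how many diagrams to render in parallel
    jobs: { type: 'string', default: '4' },
  },
  allowPositionals: true,
});

const limit = pLimit(Number(values.jobs));
const inputFiles = positionals; // every non-flag argument is treated as an input file
// ...then reuse the renderMermaid loop from the accepted answer with `limit` and `inputFiles`
```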