Benchmark of popular Markdown parsers in Node.js using tinybench.
Parsers producing structured output (AST or token arrays), benchmarked in suite 1:

| Parser | Output |
|---|---|
| commonmark | AST |
| ironmark | AST JSON |
| markdown-it | Token array |
| litemarkup | AST |
Parsers rendering directly to HTML strings, benchmarked in suite 2:

| Parser | Output |
|---|---|
| commonmark | HTML string |
| ironmark | HTML string |
| markdown-it | HTML string |
| markdown-wasm | HTML string |
| marked | HTML string |
| micromark | HTML string |
| snarkdown | HTML string |
| litemarkup | HTML string |
```
npm install
npm run benchmark
```

The benchmark input corpus comes from markdown-dataset. The script loads base64-encoded markdown documents from that package and concatenates them into a single input string.
Tune benchmark parameters in `BENCHMARK_CONFIG` at the top of `benchmark.js`:

- `files`: number of markdown documents to use (`Infinity` = all)
- `rounds`: number of benchmark rounds
- `timeMs`: measurement time per task per round
- `warmupMs`: warmup time per task per round
- `gcBetweenRounds`: call `global.gc()` between rounds (requires `node --expose-gc`)
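As a sketch, a config object with the fields described above might look like this (the values shown are illustrative defaults, not necessarily the project's actual ones):

```javascript
// Hypothetical shape of BENCHMARK_CONFIG (field names from the list above).
const BENCHMARK_CONFIG = {
  files: Infinity,        // number of markdown documents to use (Infinity = all)
  rounds: 3,              // number of benchmark rounds
  timeMs: 1000,           // measurement time per task per round
  warmupMs: 200,          // warmup time per task per round
  gcBetweenRounds: false, // call global.gc() between rounds (needs --expose-gc)
};
```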
If `gcBetweenRounds: true`, run with:

```
node --expose-gc benchmark.js
```

The benchmark runs two separate suites:
- AST/token parsing: Measure throughput of parsers that produce structured output (AST or token arrays).
- Parsing + HTML rendering: Measure throughput of parsers that render directly to HTML strings.
Both suites run against the same input corpus with the same metrics, but are reported separately to avoid mixing incomparable output types.
Some libraries appear in both suites with different configurations: AST parsing in suite 1, HTML rendering in suite 2.
Code in this project was largely written with AI tools, then reviewed and edited by a human.
MIT