This repository serves as the high-performance proving ground for ParserNG, benchmarking it against the industry's most prominent Java math expression parsers. We focus on nanosecond-level efficiency, memory overhead, and structural resilience to ensure ParserNG remains the definitive choice for high-performance backend infrastructure.
The code for each benchmark against ParserNG lives in the package `com.github.gbenroscience.parser.wars.individual`; you can run the benchmarks yourself. A collection of test expressions is defined in the class `com.github.gbenroscience.parser.wars.individual.ParserNGWars`. All you have to do is change the index in `ParserNGWars` so that it points to the relevant entry in the `ParserNGWars.EXPRESSIONS` array. Once this is set, all the benchmarking classes in `com.github.gbenroscience.parser.wars.individual` will pick up that expression and run with it.
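Conceptually, the selection mechanism works like the sketch below. Only the `EXPRESSIONS` array name comes from this repo; the class name, sample expressions, and index constant here are illustrative, not the repo's actual code:

```java
// Illustrative sketch only: mirrors the ParserNGWars.EXPRESSIONS idea,
// but the actual class, array contents, and field names in the repo differ.
public class ExpressionPicker {

    // A few sample expressions of the kind the benchmarks exercise.
    static final String[] EXPRESSIONS = {
        "sin(sqrt(x^2 + y^2))",
        "x^3 + 2*x*y - y^2",
        "cos(x) * sin(y) / (1 + x^2)"
    };

    // Change this index to make every benchmark pick up a different expression.
    static final int CURRENT = 0;

    public static String currentExpression() {
        return EXPRESSIONS[CURRENT];
    }

    public static void main(String[] args) {
        System.out.println(currentExpression());
    }
}
```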
- exp4j (interpreted) - Popular interpreted parser
- Janino (full-blown Java compiler!) - We compare against two distinct tiers of the Janino bytecode compiler:
  - BaseJanino: Utilizes standard library APIs to compile expressions into Java classes at runtime. It is flexible and easy to implement but retains minor overhead from internal reflection and generic execution paths.
  - FieryJanino: The absolute performance ceiling for Janino. It compiles expressions directly against a user-defined Java interface, allowing direct method calls that eliminate nearly all invocation overhead.
- ParserNG Standard Mode (interpreted)
- ParserNG Turbo, Array-based (interpreted) - Our proprietary optimization engine, which achieves compiler-grade speeds while remaining entirely zero-allocation. Variables are passed via an array.
- ParserNG Turbo, Widening-based (interpreted) - The same optimization engine; variables are passed using a widening approach.
- mXparser (interpreted) - Feature-rich, heavyweight
- Parsii (interpreted) - Lightweight interpreter
- Paralithic (byte-compiled; uses ASM) - Strong bytecode contender
- Native Java (expression hard-coded as a Java statement) - Hand-written baseline
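The two Turbo variants above differ only in how variables reach the evaluator. The interfaces below are hypothetical (not ParserNG's real API) and simply illustrate the two calling conventions the variants are named after:

```java
// Hypothetical interfaces illustrating the two variable-passing styles
// described above; ParserNG's actual API differs.
public class CallingStyles {

    // Array-based: all variables arrive in one double[], indexed by position.
    interface ArrayEvaluator {
        double eval(double[] vars);
    }

    // Widening-based: each variable is a separate scalar parameter, so
    // narrower numeric arguments are widened to double at the call site.
    interface WideningEvaluator {
        double eval(double x, double y);
    }

    public static void main(String[] args) {
        ArrayEvaluator byArray = v -> Math.sin(Math.sqrt(v[0] * v[0] + v[1] * v[1]));
        WideningEvaluator byScalars = (x, y) -> Math.sin(Math.sqrt(x * x + y * y));

        double a = byArray.eval(new double[] {3.0, 4.0});
        double b = byScalars.eval(3, 4); // ints widened to double
        System.out.println(a == b);      // both compute sin(5)
    }
}
```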
*These results reflect the current state of ParserNG 1.0.7+.*

The battles are fought over the expression `sin(sqrt(x^2 + y^2))`. Lower scores indicate higher speed (ns/op).
Environment: JDK 24.0.1 • JMH 1.37 • Average Time (ns/op)
| Library / Engine | Score (ns/op) | vs Fastest | Alloc Rate | Bytes/Op | Verdict |
|---|---|---|---|---|---|
| Native Java (hand-written) | 55.04 | 1.00× | 0.013 MB/s | ~0 | Theoretical floor |
| FieryJanino (direct) | 55.14 | 1.00× | 0.013 MB/s | 0.001 | Current speed king |
| Paralithic | 55.65 | 1.01× | 0.013 MB/s | 0.001 | Excellent bytecode |
| ParserNG Turbo | 59.9 – 64.7 | 1.09×–1.18× | 0.013 MB/s | 0.001 | Best overall |
| BaseJanino | 68.39 | 1.24× | 1005 MB/s | 72 | High GC pressure |
| Parsii | 91.64 | 1.66× | 0.013 MB/s | 0.001 | Decent lightweight |
| exp4j | 221.02 | 4.01× | 657 MB/s | 152 | Heavy allocation |
| mXparser | 4322 | 78.52× | 531 MB/s | 2402 | Not competitive |
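The "Native Java" floor in this table is simply the benchmark expression hard-coded as a plain Java statement, roughly:

```java
public class NativeBaseline {

    // The benchmark expression sin(sqrt(x^2 + y^2)) written directly in Java;
    // this is the hand-written baseline every parser is racing against.
    static double eval(double x, double y) {
        return Math.sin(Math.sqrt(x * x + y * y));
    }

    public static void main(String[] args) {
        System.out.println(eval(3.0, 4.0)); // sin(5)
    }
}
```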
ParserNG Turbo is the dominant king of interpreted Java math parsers — delivering near-bytecode speeds with the safety, flexibility, and compatibility of a pure interpreter.
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 19.97 | ±0.13 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| ParserNG Turbo (Widening-based) | 62.68 | ±1.26 | 0.013 MB/s | 0.001 | Excellent scalar performance |
| ParserNG Turbo (Array-based) | 63.62 | ±2.83 | 0.013 MB/s | 0.001 | Highest scalability |
| ParserNG (Standard) | 100.45 | ±2.44 | 0.013 MB/s | 0.001 | Regular ParserNG engine |
| exp4j | 221.02 | ±16.87 | 656.97 MB/s | 152.00 | High allocation & GC pressure |
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.69 | ±0.65 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| FieryJanino (direct interface) | 55.14 | ±0.73 | 0.013 MB/s | 0.001 | Fastest Janino variant |
| ParserNG Turbo (Array-based) | 60.64 | ±0.45 | 0.013 MB/s | 0.001 | Highest scalability |
| ParserNG Turbo (Widening-based) | 61.50 | ±3.02 | 0.013 MB/s | 0.001 | Excellent scalar performance |
| ParserNG (Standard) | 98.43 | ±4.30 | 0.013 MB/s | 0.001 | Regular ParserNG engine |
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.72 | ±1.23 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| ParserNG Turbo (Widening-based) | 60.67 | ±0.30 | 0.013 MB/s | 0.001 | Best scalar performance |
| ParserNG Turbo (Array-based) | 60.77 | ±0.34 | 0.013 MB/s | 0.001 | Highest scalability |
| BaseJanino (normal) | 68.39 | ±4.83 | 1005.5 MB/s | 72.00 | High allocation & GC pressure |
| ParserNG (Standard) | 95.84 | ±1.53 | 0.013 MB/s | 0.001 | Default ParserNG mode |
ParserNG Turbo is ~12% faster than BaseJanino while using ~70,000× less memory per operation and producing zero GC events.
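Both figures in that claim follow directly from the table above, as a quick back-of-the-envelope check shows:

```java
public class TurboVsBaseJanino {
    public static void main(String[] args) {
        // Scores from the table above (ns/op).
        double baseJanino = 68.39;
        double turboArray = 60.77;

        // Speed advantage: BaseJanino takes ~12.5% longer per op.
        double speedupPct = (baseJanino / turboArray - 1) * 100;
        System.out.printf("Turbo is ~%.1f%% faster%n", speedupPct);

        // Memory: 72 B/op vs ~0.001 B/op is roughly a 72,000x difference.
        double memRatio = 72.0 / 0.001;
        System.out.println(memRatio);
    }
}
```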
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.42 | ±0.38 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| ParserNG Turbo (Widening-based) | 62.96 | ±0.28 | 0.013 MB/s | 0.001 | Excellent scalar performance |
| ParserNG Turbo (Array-based) | 63.24 | ±4.47 | 0.013 MB/s | 0.001 | Highest scalability |
| ParserNG (Standard) | 103.06 | ±5.14 | 0.013 MB/s | 0.001 | Regular ParserNG engine |
| mXparser | 4321.99 | ±362.23 | 531.06 MB/s | 2401.60 | Extremely high allocation |
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.21 | ±0.52 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| ParserNG Turbo (Array-based) | 63.08 | ±0.68 | 0.013 MB/s | 0.001 | Highest scalability |
| ParserNG Turbo (Widening-based) | 63.61 | ±4.17 | 0.013 MB/s | 0.001 | Excellent scalar performance |
| Parsii | 91.64 | ±45.54 | 0.013 MB/s | 0.001 | Lightweight interpreter |
| ParserNG (Standard) | 104.73 | ±7.87 | 0.013 MB/s | 0.001 | Regular ParserNG engine |
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.27 | ±0.11 | 0.013 MB/s | ~0 | Variable setup + invocation overhead |
| Paralithic | 55.65 | ±2.10 | 0.013 MB/s | 0.001 | Strong bytecode competitor |
| ParserNG Turbo (Widening-based) | 59.91 | ±0.82 | 0.013 MB/s | 0.001 | Excellent scalar performance |
| ParserNG Turbo (Array-based) | 60.52 | ±0.64 | 0.013 MB/s | 0.001 | Highest scalability |
| ParserNG (Standard) | 103.13 | ±4.65 | 0.013 MB/s | 0.001 | Regular ParserNG engine |
Benchmark Mode: Average time (avgt) • JDK 24.0.1 • JMH 1.37
| Benchmark | Score (ns/op) ↓ | ± Error | Alloc Rate | Bytes/Op | Notes |
|---|---|---|---|---|---|
| Benchmark Baseline | 20.40 | ±0.66 | 0.003 MB/s | ~0 | Variable setup + invocation overhead |
| Native Java (hand-written) | 55.04 | ±2.07 | 0.003 MB/s | ~0 | Pure Java baseline (no parser) |
| ParserNG Turbo (Array-based) | 64.25 | ±3.16 | 0.003 MB/s | ~0 | Highest scalability |
| ParserNG Turbo (Widening-based) | 64.65 | ±5.71 | 0.003 MB/s | ~0 | Excellent scalar performance |
| ParserNG (Standard) | 99.59 | ±8.91 | 0.003 MB/s | ~0 | Regular ParserNG engine |
- ParserNG Turbo consistently delivers near-top-tier performance while remaining zero-allocation and highly flexible.
- It beats exp4j by 3.5–9×, crushes mXparser, and stays within 10–12% of the best Janino variants.
- Zero GC pressure makes it ideal for high-frequency trading, real-time systems, and large-scale backend services.
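The multipliers in the takeaways above come straight from the summary table. For the expression shown, for example:

```java
public class Multipliers {
    public static void main(String[] args) {
        // Scores (ns/op) from the summary table for sin(sqrt(x^2 + y^2)).
        double turboBest = 59.9;      // best ParserNG Turbo score
        double exp4j = 221.02;
        double fieryJanino = 55.14;

        // Gap to exp4j: ~3.7x on this expression.
        System.out.printf("vs exp4j: %.1fx%n", exp4j / turboBest);

        // Distance behind the fastest Janino variant at Turbo's best score: under 10%.
        System.out.printf("vs FieryJanino: +%.1f%%%n", (turboBest / fieryJanino - 1) * 100);
    }
}
```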
ParserNG combines blazing speed, excellent API, and advanced features (symbolic differentiation, matrices, solvers, etc.) that others lack.
ParserNG Turbo is the clear winner for real-world use — combining near-native speed, true zero-allocation behavior, unmatched flexibility, and rock-solid deployment safety.
ParserNG is a pure interpreted engine: it generates no class files or bytecode at runtime. This gives it decisive advantages that no bytecode-based solution can match, especially in environments where dynamic class generation is restricted or problematic:
- Superior compatibility across the entire Java ecosystem (desktop, server, mobile, embedded)
- Complete avoidance of DEXing issues on Android (no multi-dex, no 64k-method-limit problems)
- Works seamlessly in Android, Spring Boot, Quarkus, GraalVM native images, and secured JVMs
- Zero risk of `ClassLoader`/`SecurityManager` conflicts or generated-class memory leaks
- Simpler deployment overall
It stays within 10–15% of the absolute fastest specialized solutions (FieryJanino / Paralithic) while offering:
- True zero-allocation (eliminates GC pauses)
- Superior API flexibility
- Excellent scalability with large numbers of variables
- A rich feature set no competitor matches
While others trade speed for simplicity or vice versa, ParserNG Turbo delivers the best overall package.
In the war of Java math expression parsers, ParserNG Turbo stands tall.
It delivers performance that rivals or beats specialized bytecode compilers while maintaining the safety and portability that only a pure interpreted solution can guarantee.
ParserNG — Blazing Fast. Zero Compromise. Built for the Real World.
Ready to win? Clone the repo, run the benchmarks, and join the fast lane.
Made with 🔥 by GBENRO JIBOYE (@gbenroscience)
Benchmarks updated May 1, 2026.