Refactor fastJsonFormat for high-performance JSON formatting and Unicode decoding #3
Conversation
Optimized fastJsonFormat by inlining whitespace and atom scanning loops. Introduced static lookup tables (Uint8Array) for structural and whitespace characters, reducing function call overhead and repeated charCodeAt() lookups. Benchmark improvements: ~10–20% faster on large JSON inputs. Refs: #1
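For illustration, here is a minimal sketch of the lookup-table idea described above. The table names, the chosen character sets, and the helper shown are assumptions for illustration, not the PR's actual code.

```js
// Illustrative sketch only: a 256-entry Uint8Array lets the hot loop classify
// ASCII characters with a single indexed read instead of per-character helper
// calls. Names and table layout are assumptions, not the PR's actual code.
const WHITESPACE = new Uint8Array(256);
WHITESPACE[0x20] = 1; // space
WHITESPACE[0x09] = 1; // tab
WHITESPACE[0x0a] = 1; // line feed
WHITESPACE[0x0d] = 1; // carriage return

const STRUCTURAL = new Uint8Array(256);
for (const ch of '{}[]:,') STRUCTURAL[ch.charCodeAt(0)] = 1;

// Inlined whitespace scan: advance while the lookup table says "whitespace".
function skipWhitespace(json, i) {
  while (i < json.length && WHITESPACE[json.charCodeAt(i)] === 1) i++;
  return i;
}

const src = '  \n\t{"a": 1}';
const pos = skipWhitespace(src, 0);
console.log(pos, STRUCTURAL[src.charCodeAt(pos)] === 1); // 4 true
```

Character codes above 0xFF index past the table and read back as undefined, which safely falls through to "not whitespace / not structural".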
@helloanoop, I would be grateful if you could review my pull request.
@Sumith-Kumar-Saini Can you share screenshots of the benchmark runs before and after your changes, so that I can see the exact improvement across different data sizes?
Hey @helloanoop 👋 Benchmarks:
System (for reference):
Let me know if you’d like additional sizes or runs.
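For context, a minimal sketch of the kind of harness that could produce before/after timings across data sizes. The names fastJsonFormatOld/fastJsonFormatNew and the payload sizes are placeholders, not the benchmark script actually used here.

```js
// Hypothetical benchmark sketch: placeholder formatters stand in for the
// pre- and post-refactor implementations; payload sizes are arbitrary.
const { performance } = require('node:perf_hooks');

// Placeholders: swap in the real pre-/post-refactor formatters.
const fastJsonFormatOld = (s) => JSON.stringify(JSON.parse(s), null, 2);
const fastJsonFormatNew = (s) => JSON.stringify(JSON.parse(s), null, 2);

function makePayload(items) {
  // Build a JSON string whose size grows roughly linearly with `items`.
  const arr = Array.from({ length: items }, (_, i) => ({ id: i, name: `user_${i}`, ok: i % 2 === 0 }));
  return JSON.stringify(arr);
}

function bench(label, fn, input, runs = 20) {
  fn(input); // warm-up run
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn(input);
  const avgMs = (performance.now() - start) / runs;
  console.log(`${label}: ${avgMs.toFixed(2)} ms avg on ~${(input.length / 1024).toFixed(0)} KB`);
}

for (const items of [1e3, 1e4, 1e5]) {
  const input = makePayload(items);
  bench('old', fastJsonFormatOld, input);
  bench('new', fastJsonFormatNew, input);
}
```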
@Sumith-Kumar-Saini This is fantastic. I wanted to have a further discussion; can you accept my connection request on LinkedIn? Let's talk there.
Thanks! Glad you found it useful. I've accepted your LinkedIn connection — happy to continue the discussion there.
Hey @Sumith-Kumar-Saini Really impressed with the perf improvement 👏 👏 I ran the benchmark on my system; while performance doubles for files around 100 KB, it gets worse than the current implementation for larger data sizes. Could you check what might be causing that? I am running this on a MacBook Air M4 with 16 GB RAM.
Thanks for catching that, @helloanoop. I’ll dig into the large-file performance issue and see what’s causing the slowdown — will share an update soon.
Thank you @Sumith-Kumar-Saini. The performance improvements in your approach are significant, so I do want to get your PR merged.
Fantastic @Sumith-Kumar-Saini! The latest benchmark improvements are 🔥
@Sumith-Kumar-Saini Could you fix the conflicts? They came up because we merged the PR that decodes forward slashes.
Hey @helloanoop, I have dug deeper to identify what was causing the long operation.
Use of AI Assistance
Code Update
Question
Performance Details
Current Program Benchmark
New Implementation Program Benchmark
Future Plans
Lastly, could you please check the program's performance and compare it with the current codebase?
Hey @helloanoop, the tests have passed successfully, and this branch is now ready for review and merge.
Quick Question
Great job @Sumith-Kumar-Saini! Below are the before and after benchmarks on my machine.
No need to apologise. I would in fact encourage one to use AI to augment themselves, so long as you know and understand the logic and reasoning behind the code.
Yes
Anything, as long as it results in an improvement of the benchmarks and runs in the browser.




Description
This PR refactors fastJsonFormat in src/index.js to improve speed, memory efficiency, and Unicode handling.
Key updates:
- Added decodeUnicodeString() for proper \uXXXX decoding, including surrogate pairs (sketched under Technical Details below).
- Introduced Uint8Array lookup tables for structural and whitespace characters.
- Inlined helper functions (scanString, scanAtom, etc.) and simplified the logic.
Technical Details
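As a rough illustration of what \uXXXX decoding with surrogate-pair handling involves — a hedged sketch only; decodeUnicodeString() in this PR may differ in signature and edge-case behavior:

```js
// Illustrative only: decode literal \uXXXX escapes, combining a high/low
// surrogate pair into one code point. The PR's decodeUnicodeString() may
// handle its input and edge cases differently.
function decodeUnicodeEscapes(str) {
  return str.replace(/\\u([0-9a-fA-F]{4})(?:\\u([0-9a-fA-F]{4}))?/g, (_, hi, lo) => {
    const high = parseInt(hi, 16);
    if (lo === undefined) return String.fromCharCode(high);
    const low = parseInt(lo, 16);
    // High surrogate followed by a low surrogate: combine into one code point.
    if (high >= 0xd800 && high <= 0xdbff && low >= 0xdc00 && low <= 0xdfff) {
      return String.fromCodePoint(((high - 0xd800) << 10) + (low - 0xdc00) + 0x10000);
    }
    // Otherwise decode the two escapes independently as UTF-16 code units.
    return String.fromCharCode(high, low);
  });
}

console.log(decodeUnicodeEscapes('caf\\u00e9'));     // café
console.log(decodeUnicodeEscapes('\\ud83d\\ude00')); // 😀 (surrogate pair)
```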
Benchmark
CPU: Intel(R) Core(TM) i5-8279U CPU @ 2.40GHz
RAM: 16.0 GB
GPU: Intel(R) Iris(R) Plus Graphics 655
Testing
- Output verified against JSON.stringify and the legacy implementation (a sketch follows below).
- fastJsonFormat tests pass.
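For context, a sketch of the kind of output check the first testing bullet describes. The export path, the assumption of 2-space indentation, and the sample inputs are guesses for illustration, not the project's actual test setup.

```js
// Hypothetical equivalence check: assumes fastJsonFormat is the module's
// default export and that it formats with 2-space indentation, which may not
// match the real project setup.
const assert = require('node:assert');
const fastJsonFormat = require('./src/index.js');

const samples = [
  '{"a":1,"b":[true,null,"x"]}',
  '{"nested":{"list":[1,2,3],"text":"caf\\u00e9"}}',
];

for (const raw of samples) {
  const expected = JSON.stringify(JSON.parse(raw), null, 2);
  assert.strictEqual(fastJsonFormat(raw), expected);
}
console.log('fastJsonFormat output matches JSON.stringify on the sample inputs');
```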