@WebReflection WebReflection commented Jan 22, 2025

This is the cleanest approach for RAM:

  • no GC pressure around the creation of many arrays, everything is placed directly in the resizable array or buffer
  • memory consumption is linear and it never exceeds the resulting Uint8Array boundaries
  • all utilities are JIT-able and there are no classes anymore (edit: an encoding class turned out to be the way to go after all)

... and yet, this is slower than the other MR ... I am almost out of resources!

This is, however, the closest thing to C I've written in a while, but it looks like ui8a.buffer.resize(...) is way slower than previous approaches, so next up is a flame graph to see what could be done to improve encoding performance ... right now the chart is not looking good; the other MR seems to do better, which is a nonsensical bummer to me!

@coveralls
Pull Request Test Coverage Report for Build 12907568183

Details

  • 324 of 324 (100.0%) changed or added relevant lines in 5 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 100.0%

Totals Coverage Status
Change from base Build 12887736317: 0.0%
Covered Lines: 646
Relevant Lines: 646

💛 - Coveralls

@coveralls

coveralls commented Jan 22, 2025

Pull Request Test Coverage Report for Build 12913228005

Details

  • 447 of 447 (100.0%) changed or added relevant lines in 5 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 100.0%

Totals Coverage Status
Change from base Build 12887736317: 0.0%
Covered Lines: 666
Relevant Lines: 666

💛 - Coveralls

@WebReflection WebReflection force-pushed the array-buffer branch 2 times, most recently from 48b122b to 73bd120 Compare January 22, 2025 13:47
@WebReflection
Owner Author

WebReflection commented Jan 22, 2025

OK, it looks like just pushing into a forever-growing array via i++ is the fastest default I can provide ... everything else felt more like an exercise than an improvement over JS's ability to properly handle buffers and memory.
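The "forever-growing array incremented via i++" idea can be sketched like this (a minimal illustration, not the PR's actual encoder): let the engine manage growth of a plain array, then copy into a typed array once at the end.

```javascript
// Plain-array accumulator: V8 handles the growth of a regular array
// better than explicit buffer.resize(...) calls on the hot path.
const output = [];
let i = 0;

const pushMany = (values) => {
  for (let j = 0; j < values.length; j++) output[i++] = values[j];
};

pushMany([0x68, 0x69]);
pushMany([0x21]);

// one final copy produces the resulting Uint8Array
const result = new Uint8Array(output);
```

The trade-off is a single O(n) copy at the end in exchange for much cheaper writes while encoding.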

I am actually pretty pleased with the current performance, and I went back to a class because somehow JS handles that better.

@WebReflection
Owner Author

For history's sake, I went through:

npx clinic flame -- node test/encode.js measurements2.txt

multiple times ... there is literally nothing on the hot path except:

  • Map.prototype.set ... nothing I can do about it; I need a Map to avoid recursion
  • GeneratorPrototypeNext ... so I dropped every for of loop I could and things improved, but ... it's still there
  • __write@GLIBC__, which I have no idea about ...

In short, if my perf debugging ended up lurking around the V8 internals, I feel confident the code does all it can to be as fast as possible; it can't go faster, unless proven wrong explicitly.

@WebReflection WebReflection merged commit 25775d1 into main Jan 22, 2025
2 checks passed