Memory and performance optimization for large dynamically-sized array-based collections #73699
-
Hey. This is my first interaction on this repo. I love the .NET project and I'd like to contribute a suggestion: an optimization for large lists, sets, and other collections backed by arrays. These data structures re-allocate the backing array whenever an insertion exceeds the current capacity (the new array is typically double the size of the old one). For the recurring creation of large collections this can put considerable strain on the GC and noticeably slow down the application. I figure the collection could return its old backing array to a pool when it is garbage collected, via its finalizer (or the collection could be made disposable so the array is returned explicitly).

I mostly just copy-pasted the current implementation for the proof of concept. Benchmark environment:

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.22000
Intel Core i7-8565U CPU 1.80GHz (Whiskey Lake), 1 CPU, 8 logical and 4 physical cores
.NET SDK=6.0.400
[Host]     : .NET 6.0.8 (6.0.822.36306), X64 RyuJIT
DefaultJob : .NET 6.0.8 (6.0.822.36306), X64 RyuJIT

The only caveat is that … Code for the POC and benchmarks can be found here.
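As a rough sketch of the idea (the `PooledList<T>` name and shape are mine, not a proposed API), a grow-only list could rent its backing array from `ArrayPool<T>.Shared` and return the old buffer after each growth, instead of leaving it for the GC:

```csharp
using System;
using System.Buffers;

// Minimal sketch (not a proposed API): a grow-only list that rents its
// backing array from ArrayPool<T>.Shared and returns the old buffer to
// the pool after growing, rather than allocating fresh arrays.
public sealed class PooledList<T> : IDisposable
{
    private T[] _items;
    private int _count;

    public PooledList(int capacity = 4)
        => _items = ArrayPool<T>.Shared.Rent(capacity);

    public int Count => _count;

    public T this[int index]
    {
        get
        {
            if ((uint)index >= (uint)_count)
                throw new ArgumentOutOfRangeException(nameof(index));
            return _items[index];
        }
    }

    public void Add(T item)
    {
        if (_count == _items.Length)
        {
            // Mirror List<T>'s doubling strategy, but recycle the old
            // buffer through the pool instead of dropping it.
            T[] larger = ArrayPool<T>.Shared.Rent(_items.Length * 2);
            Array.Copy(_items, larger, _count);
            ArrayPool<T>.Shared.Return(_items, clearArray: true);
            _items = larger;
        }
        _items[_count++] = item;
    }

    // Explicit disposal returns the final buffer; a finalizer could do
    // the same for callers that forget, as suggested above.
    public void Dispose()
    {
        ArrayPool<T>.Shared.Return(_items, clearArray: true);
        _items = Array.Empty<T>();
        _count = 0;
    }
}
```

Note that returning pooled arrays from a finalizer is trickier than it looks: finalizers run on a dedicated thread, and returning a buffer that is still reachable elsewhere would corrupt the pool, which is part of why explicit disposal (or `ArrayPool` usage at the call site) is the usual pattern.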
-
This is a suggestion for the runtime repo. Moving it over there.
-
Duplicate of #27023
-
Cc @dotnet/area-system-collections