Some high-performance immutable data structures #214

GregRos opened this Issue Mar 19, 2013 · 33 comments



6 participants

GregRos commented Mar 19, 2013

Around a month ago I said some stuff about having unfinished immutable data structures that I thought might be useful. It so happened that I got busy and didn't have time to finish them properly until around a week ago.

My goals were very performance oriented and I ran into a few dead-ends in that regard, so I decided to implement them in C#. I ran some benchmarks and I'm quite happy with the results.

Note that I'm not sure how much the change of language affected the results. The C# compiler seems to optimize more, but more importantly, what you write in C# translates very closely (with some obvious exceptions) to IL code. What you write in F# doesn't, and it is thus very difficult to optimize when optimization boils down to things like minimizing method calls.

I'm not sure whether you'll want to use them, because both the language and the interface are quite different from the other collections you have (though an F# assembly includes some inline functions and operators). However, according to my benchmarks the Vector implementation performs several times better than FSharpx's Vector at random access (although it is implemented using a similar data structure), and the deque (called Sequence; it's not exactly a deque, to be honest) supports many fast operations, including get and set by index, subsequences and splits, concat, insert in the middle, and more. This is in addition to practically real-time access and add/drop at either end. It is implemented as a 2-3 finger tree that is highly optimized for .NET.

There is also a HashMap, implemented as an array mapped trie (similar to the Vector), which uses equality semantics instead of comparison semantics, and early benchmarks showed it performed significantly better than FSharp's own map for certain inputs, such as strings.

There is still some testing to be done and some modifications to make. They're pretty much completed though.

Link to the library:
Direct link to the data structure wrappers that expose the user interface. Most of everything else is internal.

Here are some benchmarks, in ms. Tests were performed on collections of length 100k, added from the 'Last' end.
Although some collections support certain operations (e.g. FSharpList supports List.nth), I chose not to include them because they performed very badly. I also did not include RealTimeDeque because it gave me a StackOverflowException, presumably because of how the LazyList is implemented.

The benchmarks were performed with all optimizations enabled, no debugging, and without the DEBUG compilation symbol. The latter is important because there are some conditional consistency checks with a serious impact on performance that would otherwise get compiled in.


Very interesting. Hope to make the time to take a close look within a week.

GregRos commented Mar 20, 2013

On a different note, does anyone know what I should call the Sequence<T>? As it is, it is likely to cause confusion with Seq<'a>. I hesitate to call it simply 'deque' because it is a lot more than that; besides, 'deque' is not a common word and people might not know what it means. Calling it a finger tree is very technical, and most people wouldn't understand what it's for.

I'm also not sure whether the data structures should implement any particular interfaces, and if so, which ones.

forki commented Mar 20, 2013

I'm very interested in the Vector implementation. Last time I checked, my version was already faster than the one in clojure.core. If we can make it faster still, this might help a lot. The vector is such a nice and useful data structure.


@forki Yes, my benchmarks showed the current Vector to be very fast in every benchmark I ran. If this one is faster it will be huge! Hope to look at it this weekend, or maybe some evening this week.

@GregRos I may have a better name after looking at Sequence<T> , perhaps Segment<T> ?


@GregRos I've looked at Solid.Sequence<'T> (the C# version, which I accessed from F#) and more than ever I think "Segment", as in "line segment", is the best name for it. It looks like you can perform all the line segment operations of the integer number line, which is very exciting. I also think it would be a great addition to the linear functional data structures in the FSharpx.Collections namespace.

I think you have a better naming standard for your functions (and you should probably keep it for C#), but I feel even more strongly an F# library should follow the naming standard set by Microsoft.FSharp.Collections.List for linear functional data structures.

So my first suggestion is renaming almost all Segment's functions (except the ones that are completely new) to follow the other FSharpx.Collections linear functional data structures.

Other observations/remarks:

  1. It really needs intellisense/xml comment tooltips. Please start the tooltip with the time complexity (if appropriate).
  2. Empty and several apparently O(1) functions would be better implemented as values that do not take unit.
  3. Most, if not all, of the type members should be available as module let bindings as well to facilitate composition through piping.
  4. Needs a module ofSeq binding.
  5. Needs active patterns of Cons (head, tail) and Conj (initial, last).
  6. I did not investigate it, but I assume "foreach" and "foreachBack" are "iter" and "iterBack". They should be renamed to F# standards.
  7. fold and foldBack functions would be useful.
  8. I did not investigate what it does, but would it be appropriate to rename "DelaySequence" to "LazySegment"? There is already LazyList in FSharpx.Collections, so this name would be more intuitive.

I also performed the following benchmarks against FSharpx.Collections.Deque and Vector. See and for more on the framework and methodology I used. I'd be happy to make the complete results spreadsheet available to you.

All in all Segment performs very well and since it is so versatile will be a great addition to FSharpx.Collections. I only benchmarked functions that are comparable to functions in Deque and Vector, but I'm looking forward to learning what the time complexity of the other functions are.

"Scale" refers to the number of elements in the structure benchmarked, or the size of the resulting structure. Every benchmark was performed at scales 100 - 100,000 by powers of 10.

Initial data is always an F# int32 array.

The following benchmarks compare Deque/Segment:

  1. construct structure by "cons" (AddFirst) one element at a time

Both performed nearly identically until scale 100,000, when Segment took over 400% as much time

  2. construct structure by "conj" (AddLast) one element at a time

same result as cons

  3. construct structure by alternating "cons" and "conj" (AddFirst, AddLast) one element at a time

same result as cons and conj, except the slowdown for Segment at scale 100,000 was only 300%

  4. initialize structure using ofSeq (ofSeq not available for Segment, so Seq.fold used)

comparable times until scale 10,000 -- Segment took 300% as much time, and scale 100,000 -- Segment took 3,100% as much time

  5. iterate in a tail recursion taking tail (DropFirst) until empty

essentially the same performance at all scales; at scale 100,000 Segment took 133% as much time...hardly worth mentioning given the vagaries of benchmarking

  6. consume the structure in a simple Seq.Fold (essentially comparing the efficiency of the GetEnumerator operations)

Segment took 184% as much time at scale 10,000 and 360% as much time at scale 100,000

  7. reverse the structure

Both structures have very fast O(1) reverse; even so, Segment smoked Deque on this benchmark: on a fast i7, 3 ticks vs. 250 ticks

The remaining benchmarks compare Segment against FSharpx.Collections.Vector

  1. 10,000 random look-ups against structure size

Vector beat Segment by almost 10X at the low range of scale to almost 20X at scale 100,000

  2. 10,000 random updates against structure size

Vector was only about twice as fast at scale 100 -- by scale 100,000 performance was nearly identical

GregRos commented Mar 24, 2013

Today I made several commits that implement most of your suggestions.


There are two general reasons why I want to keep the instance interface C# inspired instead of F# inspired.

  1. People who write F# are, in general, familiar with .NET terminology and interfaces, but people who use other .NET languages aren't familiar with terms like cons, snoc, map, and conj. They might even cringe at the lack of capitalization.
  2. When writing a class for use in F#, most of the interface is going to be in external modules. This is vastly preferable to instance methods for many reasons. However, when using C#, instance methods are preferable to static methods. It is easy to write an external module for F# and forget the .NET instance interface even exists. It is also easier to extend an instance interface because F#'s system of type extensions and type aliases is a lot more powerful than C#'s. It is harder and less convenient to tailor existing instance methods for use in C#.

Incidentally, the main reason why certain members are unit -> * is to conform to the standard .NET interface.

It is possible to rename the methods to better fit in with F# and FSharpx. The external interface is a wrapper with little logic of its own anyway. However, if I do this, I'd have to have separate wrappers for C# and F#. Or create a wrapper for a wrapper. I'm not sure how much this would be worth it.

About DelayedList

This is actually something similar to a computation expression rather than a lazy data structure.

New Additions

I named Sequence<T> FlexibleList<T> with an alias xlist<'a> = FlexibleList<'a> for use in F#. I also added two modules, XList and Vector which contain an interface tailored more to F#. These are in the assembly/namespace SolidFS, along with a pair of computation expressions. The modules contain almost everything the instance interface does. I'll gradually fill in any blanks. I've also added comments and in particular, both time complexity and a general hint about expected performance. This information also appears in the instance interface.

I still need to implement efficient bulk loading for Vector and efficient RemoveAt for xlist.

I also added 'negative indexing'. This is something I like from other languages. Basically, every method that takes an index can take a negative index, which is treated as distance from the end. E.g. -1 is the last element, -2 is the next to last element, and so forth.
In C#, we can write:

var lastElement = list[-1]; 
var slice1 = list.Slice(15,-2); //Slice ranging from index 15 to before last
var slice2 = list.Slice(-3,-1); //Slice containing 3rd to last, 2nd to last, and last

In F# we have a GetSlice extension method which we can use like this:

let slice1 = list.[15 .. -2]
let slice2 = list.[-3 .. -1]
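The index translation behind this is simple; here is a minimal sketch in Python (the `normalize_index` helper is a hypothetical name for illustration, not Solid's actual implementation):

```python
def normalize_index(i, length):
    """Map a possibly-negative index to an absolute offset.
    Negative indices count back from the end: -1 is the last element."""
    if i < 0:
        i += length
    if not 0 <= i < length:
        raise IndexError(i)
    return i

# list[-1] resolves to the last element of a 10-element list:
assert normalize_index(-1, 10) == 9
# Slice(15, -2) ends at the element before the last:
assert normalize_index(-2, 10) == 8
```

Every indexed method can apply this normalization once at its entry point, so the rest of the implementation only ever sees non-negative indices.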


Some of the results are somewhat surprising and I haven't been able to recreate them. Specifically, I haven't seen any sudden performance hit compared to Deque when doing cons/conj, although I've tested all the way up to 10⁶, after which I had memory problems. Deque beats xlist by a consistent 150%-200% for me.

I also didn't see such a big difference between FSharpx.Vector and xlist when doing lookup.

It's possible that this is because of differences in test infrastructure. I'll try to perform a cleaner test later. Could you share the specific code you used to perform the benchmarks? These are my benchmarks, these are wrappers that unify interfaces, and this is how I run the tests. Specifically, invoke_test contains the actual test execution.

In theory, the wrapper classes may interfere slightly with some benchmarks.

I'd like to point out that using Seq.* methods contaminates benchmarks as they perform orders of magnitude slower than direct iteration, so you may end up benchmarking them instead of their input. It is also preferable not to use recursion, as the underlying implementation is not guaranteed to be efficient.

Finally, xlist.Reverse() is O(n) so something may have gone wrong in that benchmark.

Besides that everything more or less agrees with my results.

forki commented Mar 24, 2013

Just as a side question: do you want to contribute this stuff to FSharpx? If yes, then we should find a way to integrate the C# stuff and ILMerge it or so.

GregRos commented Mar 24, 2013

Of course. Can this be done as part of the build script, if the source code were merged into FSharpx?

forki commented Mar 24, 2013

yep. I will put this into the build


Integration into FSharpx

I would like to see XList in FSharpx.Collections, but it should go into FSharpx.Collections.Experimental first, so it can go through a couple of rounds of breaking changes, if need be.

There's probably a good way to make xlist a full-fledged F# type (and capitalized to XList), I just don't know the right way:

1) If it stays an alias, perhaps an .fsi file over Operators.fs would allow for intellisense tooltips over the type. I don't see an XML file generated for Solid.exe. (my mistake 3/25)
2) Rather than aliasing FlexibleList, inherit from it. I don't know how bad the performance hit is doing this.
3) Do something with ILMerge? I don't have any knowledge here.

It seems you are on the road to making XList a first-class F# type. Time to reconsider function naming? :)

Further Suggestions

  1. put the active pattern discriminators inside the XList module
  2. functions that are designed to throw an error should have a corresponding Try[function name] function returning Option, and the active pattern discriminators should be referencing these Try... functions


Your benchmarking is measuring simple function timings. That's useful. Many of my benchmarks incorporate the function in some simple iterative process that's a common pattern in functional programming. Of tail-recursion, tail-recursion with continuation passing, and non-tail-recursion, the only recursive wrapping process I benchmark with is tail recursion. The critical thing is like tests have like wrapping code and like code leading up to the benchmarked code so the GC state entering the benchmarks is consistent. Ultimately like benchmarks are still apples to apples comparisons.

This explains how you see Deque consistently beating XList by 150-200%, while I am seeing better performance until the larger scales.

Yes, something went wrong with my xlist.Reverse() benchmark. It was a cut-and-paste bug. I created the xlist benchmark from the Deque benchmark, replacing "Deque" with "xlist". [Deque obj].rev is O(1) and doesn't take closing parens, but [xlist obj].reverse without parens just returns the function and not the function's result. (I should have known something was wrong when it took just 3 ticks, but I thought you had written some fast code!)

Between updates and correcting the reverse benchmark, here is some additional information:

  1. active pattern benchmark
    xlist outperformed deque by about 30% at scale 100, but by scale 100,000 they were pretty equal

  2. reverse benchmark
    the O(1) deque.rev won by 13:1 at scale 100 and 33:1 at scale 100,000

  3. now that xlist has ofSeq, it did much better loading data, although deque still prevailed at scale 100,000 by 13:1


I didn't explain my benchmarking very well. Here are the kind of things I want to benchmark for data structures:

How long does it take to load sequential data of length X into the structure using ofSeq?
How long does it take to load sequential data one element at a time?
How long does it take to decrement a structure of size X down to empty, one element at a time?


As to function naming, you can use the CompiledNameAttribute on a F# function to have it make a Pascal-cased name when referenced in a language other than F#. You can see an example in the Frack project. You can also use a different name for the compiled name:

[<CompiledName("Add")>] let cons a b = ...
GregRos commented Mar 26, 2013


@jackfoxy Those are actually the exact same things I've been measuring. I'm not measuring individual function calls, but exactly the things you mentioned.

After playing around with the benchmarks I reached the conclusion that they were even more sensitive than I thought. In the end I got rid of all test infrastructure, including lambdas, wrappers, and extraneous method calls. I performed every benchmark many times, discarded the first few iterations, and summed up the remaining results. Then I iterated this process several more times, and averaged it out.

This is in addition to standard precautions such as garbage collection, etc.

I did all this because seemingly inconsequential things made the results unstable. With this sort of routine I believe I achieved stable results, at least where the testing environment is concerned.

My results were that in general, Deque outperformed xlist by a factor of 2 to 15. The factor did not change in a predictable manner, but jumped around. It moved between these two boundary points back and forth several times while I increased the input size gradually, as 10^4, 10^(4.1), 10^(4.2), 10^(4.3), ..., 10^5. As such the exact choice of input size heavily affected the results.

This is due to xlist's unpredictable worst-case. It provides O(1) amortized deque operations (with the exception of peek at either end, which is always constant), with a worst-case of O(logn). More accurately, the cost of a deque operation ranges between 1 and logn, with more costly operations becoming rarer and rarer. Fairly minor changes in the input size coincidentally shift the balance between more and less costly operations, and thus cause the benchmark results to appear erratic.
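This cost profile — cheap most of the time, occasionally log n — can be illustrated with the classic binary-counter analogy (this Python sketch is just an analogy for the amortization pattern, not the finger-tree code itself):

```python
# Incrementing a binary counter flips one bit per trailing 1-bit, plus
# the final 0 -> 1 flip: most steps are cheap, but a step at a
# power-of-two boundary costs about log2(n) flips. This mirrors how
# most deque operations on the finger tree are O(1) while rare ones
# cost O(log n).
def increment_cost(n):
    """Cost to increment a binary counter from n to n+1."""
    cost = 1
    while n & 1:
        cost += 1
        n >>= 1
    return cost

total = sum(increment_cost(i) for i in range(1 << 10))
# Amortized cost over 2^10 increments stays below 2 per operation...
assert total < 2 * (1 << 10)
# ...even though a single step can cost log2(n) + 1:
assert increment_cost((1 << 10) - 1) == 11
```

As in the counter, the exact input size determines how many of the expensive boundary steps the benchmark happens to hit, which is why small changes in size make the results jump around.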

Changing the access pattern also changes the incidence of the worst case. In general, the least favorable sequence of operations is a long sequence of additions to one end. Other access patterns, such as adding items from different ends, generally provide more favorable results.

I'm not entirely sure how, given the circumstances, to empirically compare the performance of the two data structures. This is especially true when considering the worst case of Deque, which occurs fairly often throughout the lifetime of the object and causes a severe performance hit, but is in general unlikely to be observed in pure benchmarks such as these.

As an example, a single call to Head after a sequence of Conj will generally cost as much as all the operations that preceded it.


I'll look into this. The source code is in C# so I cannot use the CompiledNameAttribute. I tried to use attributes like CompilationMappingAttribute but F# ignored them.

...However, if I moved the implementation of the wrapper to the F# assembly I could indeed make use of the CompiledName attribute, as well as certain other F#-centric features. In fact, I don't believe there is anything related to the wrapper I could not implement in F#. I will see how much work this requires.


Benchmarking makes data analysis look like science...

I overlooked something in your benchmarks (the link at the top of this thread). You are including FSharpx.Collections.SkewBinaryRandomAccessList in your benchmarks. That structure does not exist in FSharpx.Collections. It does exist in FSharpx.Collections.Experimental (and in the obsolete FSharpx.DataStructures). That leads me to suspect you were comparing against the Vector and Deque in Experimental as well. The Vector in FSharpx.Collections is slightly more performant than the one in Experimental, and the Deque in FSharpx.Collections is not even the same Deque as in Experimental.

By the way, the implementation of RandomAccessList now in FSharpx.Collections is the FSharpx.Collections.Vector inverted.

There's always going to be something less than ideal about any benchmark, and many cycles of reasoning over the meaning of worst case scenarios, GC, etc. GC is a real issue with all data structures, so benchmarks that call GC during execution also make useful measurements.

And of course once you stick your neck out and publish a benchmark, the first things others see in it are the problems.

Two different approaches provide more information, which can only be good, because ultimately benchmarking only serves to update our Bayesian priors.

I just committed the latest Solid benchmark code.

Unrelated, but this was throwing me for a while: there is a Vector in Microsoft.FSharp.Math within PowerPack, and if you reference PowerPack in your project, even though you do not open it in your current file, the tooltips for that module will mix up with the tooltips for Solid.Vector or FSharpx.Collections.Vector. Annoying.

Vector Benchmark

I recently received "FlatList" from Don Syme. It's immutable, has a limited subset of the Vector functions, and some of its functions are very fast. I put it in the benchmarks with the two vectors, where appropriate.

  1. construct structure by "conj" (AddLast) one element at a time

very close between Solid and FSharpx, with FSharpx gaining a small advantage from scale 10^4 - 10^6.
FlatList is 6X slower at scale 10^4 and >100X slower at 10^5

  2. initialize structure using ofSeq

Solid and FSharpx are nearly identical until scale 10^5, where FSharpx becomes nearly 1:2 faster.
FlatList crushes the competition here. At scale 10^6 it is 1:86 faster than FSharpx.

I even re-benchmarked FlatList with input data in a list (instead of an array), and results were nearly identical (it did take twice as long at 10^6, but that was still 43:1 faster than FSharpx using an array).

  3. iterate through all the elements in a loop

Close at smaller scales, but by 10^5 Solid is 1:3 faster than FSharpx, and at 10^6 it is over 1:4.
FlatList beats them all. At 10^6 it beats Solid by 1:13.

  4. iterate through a simple seq.fold (intended to be a proxy for comparing IEnumerable efficiency; List.ofSeq may have been a more direct approach... still an apples to apples comparison)

Solid consistently outperforms FSharpx by about 5:8 across all scales.
FlatList outperforms all and is nearly 1:2 against Solid at all scales.

  5. 10,000 random lookups by index

FSharpx consistently beats Solid by 1:2 until 10^6, where it beats Solid 1:5.
FlatList consistently beats FSharpx by about 1:1.3 across all scales.

  6. iterate in a tail recursion taking initial (DropFirst) until empty

FSharpx beat Solid consistently by 2:3 across all scales.
FlatList does not implement any comparable functionality.

  7. 10,000 random updates by index

Solid beats FSharpx at scale 10^2 by 2:3, and by scale 10^6 it is 2.5:11.1
FlatList does not implement any comparable functionality.


There is definitely room for all sorts of human error on my part in any of these benchmarks. And I never verified Solid or FlatList actually function as expected.

Neither Solid nor FSharpx is actually slow in any of these tests. FlatList performs so fast on certain tests because it is implemented internally as a simple array.

Here are my recommendations:

  1. Solid.xlist, Solid.Vector, and FlatList all should be in FSharpx.Collections.Experimental. The Solid.Vector name might be an issue, because FSharpx.Vector exists as type Vector.vector and module Vector in FSharpx.Collections.Experimental. I have no suggestions on what to do about this. (I will work on getting FlatList into Experimental.)

  2. After sufficient time, and we are confident no more breaking changes may come around, xlist and FlatList should have places in FSharpx.Collections. They are both unique and useful functional linear data structures.

  3. From these results it is hard to say Solid.Vector is sufficiently better than FSharpx.Vector to replace it in FSharpx.Collections. (That could change if some other benchmarks come in.) In fact it seems to be more of a draw. Solid.Vector does have more module-level functions (which I have not benchmarked, including functions like fold, which are comparable with functions in FSharpx.Vector). There is also the issue of breaking changes. If Solid.Vector does replace the current Vector, it has to be without breaking changes (or during a major library revision). At any rate, the current Vector could certainly benefit from the additional module functions (any volunteers?), especially since Vector is one of the most performant and versatile of the structures in the collection. (It's possible I culled code from the Experimental version that would be helpful with new functions when I created the FSharpx.Collections version.)

GregRos commented Mar 29, 2013

Some of the results of the benchmarks don't exactly mirror mine. For example, this benchmark and a few more like it show that Solid.Vector beats FSharpx.Vector at random lookups (ignore the commented-out lines; whether they exist or not, the results are the same). However, the difference is more or less a small constant factor, so it's not very significant.

The one feature Solid.Vector has implemented that FSharpx.Vector does not is a Take(n) method that returns a new vector consisting of the first n elements. This method performs as fast or faster than a single call to Update.

Another feature I intend to implement is efficient bulk loading that modifies arrays in-place and should perform a lot faster than iterating conj.

However, because FSharpx.Vector is essentially the same data structure as Solid.Vector (a type of array-mapped trie), excluding some implementation details, there is no reason why it can't have the same operations (as far as I know). I'm not familiar with the FSharpx.Vector implementation, so I can't implement them myself, but anyone who is familiar with it should be able to add them.

Here is an explanation of how it's done:
It is original (as far as I know).

I recommend starting with the Take(n) method. The reason I haven't gotten around to the bulk loading is that it is a little tricky. You may need to change some aspects of the implementation for it to work.

Another thing I've been thinking about is the following paper:

However, these modifications are rather complicated and I am afraid the performance of the data structure would go down. xlist can do all of that and more, and it does indexing pretty fast, if not nearly as fast as vector data structures. The implementation described there may simply remove Vector's edge at indexing, since finding the local index at a given trie node would become a lot more complicated.


One type of bug F# does not protect against is cut-and-paste errors.

Here is the corrected benchmark for 10,000 random lookups (times in mill.)

flatlist 10^2 0.0
solid.vector 10^2 0.2
fsharpx.vector 10^2 0.3

flatlist 10^3 0.0
solid.vector 10^3 0.2
fsharpx.vector 10^3 0.3

flatlist 10^4 0.0
solid.vector 10^4 0.2
fsharpx.vector 10^4 0.3

flatlist 10^5 0.0
solid.vector 10^5 0.2
fsharpx.vector 10^5 0.3

flatlist 10^6 0.0
solid.vector 10^6 0.5
fsharpx.vector 10^6 2.6


FlatList is already available in the fsharp source code. Should we just remove the internal in the OSS drop, or is it better to copy the file into F#x?


That's basically the plan for FlatList, but adding tests as well and preserving the copyright notice.

GregRos commented Mar 29, 2013

Whoops. I realized while writing the explanation for the bulk loading that I had already sorted out all the difficulties. I committed a version of Solid.Vector with bulk loading.

If you call ofSeq or <++ or AddLastRange with an array argument it generally runs around 40 times as fast as FSharpx.Vector. However, if you supply another type it has to convert it to an array and so is generally only 10-35 times faster.

This is done by trying to find the collection's concrete type from a small list of alternatives (such as T[], List<T>, FSharpList<T>, FlexibleList<T>, etc) and calling the best ToArray method available. If the sequence doesn't happen to be one of the predefined types, then System.Linq.Enumerable.ToArray is called, which is generally rather slow.

If a type has a ToArray method it is always preferable to convert it instead of hoping the implementation will figure out how to do so.

As such, the performance depends on 1) whether the type is convertible to one of the predefined types and 2) how well ToArray is implemented for that type. For example, if the input is convertible to FSharpList, it is 20 times faster than FSharpx.Vector. If the input is convertible to System.Collections.Generic.List it is 35 times faster, while if it is convertible to FlexibleList it is 18 times faster.

If we are forced to fall back on System.Linq.Enumerable.ToArray, it is slower than any of these and depends on the implementation of the enumerator, but the result is still generally around 10 times faster than FSharpx.Vector.

These numbers are for comparison only. Performance scales upwards in general, meaning the speedup factor grows with the size of the input sequence and a little with the current size of the Vector.
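The dispatch described above can be sketched as follows (a Python stand-in for the C# type tests; `fast_to_array` is an illustrative name, not Solid's API): try the known concrete types that expose a cheap bulk conversion, and only fall back to generic enumeration as a last resort.

```python
def fast_to_array(seq):
    """Convert a sequence to a flat array-like list, preferring
    type-specific fast paths over generic enumeration."""
    if isinstance(seq, list):    # analogous to T[] / List<T>: bulk copy
        return seq[:]
    if isinstance(seq, tuple):   # another known type with a cheap copy
        return list(seq)
    # generic fallback, analogous to System.Linq.Enumerable.ToArray:
    # walks the enumerator one element at a time
    return [x for x in seq]

assert fast_to_array([1, 2, 3]) == [1, 2, 3]
assert fast_to_array(x * x for x in range(3)) == [0, 1, 4]
```

The point of the pattern is that the caller never needs to know which path was taken; the type tests are an internal optimization.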

It may be possible to improve performance slightly.

It hasn't been tested thoroughly yet, and there are likely some edge cases in which it may not work.


@GregRos I can also benchmark with strings, but from my past experience the relative comparison of structure A to structure B is usually pretty similar, so I just didn't think it was worth my while.

Your bulk load change to Solid.Vector made a big difference. From loading 10^6 integers in 114.1 ms to 3.0 ms. Although it looks like it actually slowed down a little at scales 10^2 and 10^3,but that could be margin of error between runs. Here's the full ofSeq benchmark for loading from an array of int.

flatlist 10^2 0.1
fsharpx.vector 10^2 1.0
solid.vector 10^2 1.3

flatlist 10^3 0.1
fsharpx.vector 10^3 1.0
solid.vector 10^3 1.3

flatlist 10^4 0.1
solid.vector 10^4 1.3
fsharpx.vector 10^4 1.5

flatlist 10^5 0.2
solid.vector 10^5 1.4
fsharpx.vector 10^5 5.7

flatlist 10^6 0.7
solid.vector 10^6 3.0
fsharpx.vector 10^6 60.6

You should work on getting Solid.Vector and Solid.XList into FSharpx.Collections.Experimental.

GregRos commented Mar 30, 2013

Ugh. Rewriting the wrappers in F# and using the CompiledNameAttribute in the way I had intended is going to cause significantly more harm than good in terms of interoperability between the languages. I'm going to create a second wrapper in F# that wraps the C# wrapper, and you can decide which types are publicly visible as part of your build script. You can also change the names of the members again if you want since I doubt they will be used for anything internally.


Create a signature (.fsi) file to control the visible API.


#216 Collections.Experimental.FlatList

GregRos commented Mar 31, 2013

I thought of another addition to Solid.Vector: fast removes. The thing is, when you remove an item, there is no reason to actually change the underlying array. The data structure already contains all the information needed by the new copy. So instead, I'm going to have a "drop counter". Whenever you remove an element from the end, it increments the counter, which in effect modifies the 'virtual length' of the array. When you perform another operation that does change the underlying array, the removes are applied for real. I expect this will make removes almost instantaneous, but it will probably require some modifications.
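The drop-counter idea can be sketched like this (an illustrative Python model, not Solid's code; `DropVector` and its members are hypothetical names):

```python
class DropVector:
    """Immutable vector where removing from the end only shrinks a
    virtual length; the backing array is rewritten lazily, on the next
    operation that must copy it anyway."""

    def __init__(self, items):
        self._arr = list(items)
        self._len = len(self._arr)   # virtual length <= len(self._arr)

    def drop_last(self):
        v = DropVector.__new__(DropVector)
        v._arr = self._arr           # shared, unchanged backing array
        v._len = self._len - 1       # O(1): just decrement the counter
        return v

    def add_last(self, x):
        v = DropVector.__new__(DropVector)
        # copying forces any pending drops to be applied for real
        v._arr = self._arr[:self._len] + [x]
        v._len = self._len + 1
        return v

    def to_list(self):
        return self._arr[:self._len]

v = DropVector([1, 2, 3])
assert v.drop_last().to_list() == [1, 2]
assert v.drop_last().add_last(9).to_list() == [1, 2, 9]
assert v.to_list() == [1, 2, 3]      # original copy is untouched
```

Since the backing array is shared between copies and never mutated, persistence is preserved; only the virtual-length bookkeeping changes per remove.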

However, it will be a bit of time before I add this in, as most likely I will be inactive in the next days due to real life issues. The same goes for the interface changes.


My knee-jerk reaction is to not get fancy with deferred time complexity debits, but upon further reflection O(1) removes may well be worth the added complexity, so long as there are no unknown edge cases lurking to bite you.

GregRos commented Mar 31, 2013

Oh no, there is no deferred execution involved. All the work involved here is allocating and copying an array. It is no more difficult to copy all indices up to N than to copy the entire array.

That said, additional logic may need to be added to all the operations, but at no point will any operation actually be deferred.



GregRos commented Apr 12, 2013

Hey there. Sorry for the longish absence.

I'm going to add the F# wrappers today. I've also added a Remove(int index) algorithm for xlist.

One thing we haven't talked about is the HashMap class; I also haven't been paying much attention to it. I ran some tests, and it is faster than FSharpMap in every operation, but only by a constant factor. It is fastest at lookup and contains.
There are several other benefits, however.

  1. HashMap uses equality semantics rather than comparison semantics. These members can be easier to implement and are more widely available.
  2. It doesn't enforce the implementation of any interface, which could be problematic if you want to use an object you don't have direct control over as a key. Instead, you can supply an external IEqualityComparer object. This feature is not available in FSharpMap as far as I know.
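The external-comparer idea can be sketched as follows (a Python stand-in, mutable for brevity; `EqHashMap` is a hypothetical name, not Solid's API). The comparer is a pair of functions, hash and equals, supplied by the caller rather than taken from the key type:

```python
class EqHashMap:
    """Hash map keyed on caller-supplied equality semantics, analogous
    to passing an external IEqualityComparer instead of requiring the
    key type to implement an interface."""

    def __init__(self, hash_fn, eq_fn):
        self._hash, self._eq = hash_fn, eq_fn
        self._buckets = {}

    def set(self, key, value):
        bucket = self._buckets.setdefault(self._hash(key), [])
        for i, (k, _) in enumerate(bucket):
            if self._eq(k, key):     # replace existing equal key
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._buckets.get(self._hash(key), []):
            if self._eq(k, key):
                return v
        raise KeyError(key)

# case-insensitive string keys, without touching the key type itself:
m = EqHashMap(lambda s: hash(s.lower()), lambda a, b: a.lower() == b.lower())
m.set("Alpha", 1)
assert m.get("ALPHA") == 1
```

This is what makes it possible to use a key type you don't control: the equality semantics live in the map, not the key.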

It has a few bugs. If you think it might be useful, I'll fix those bugs and release it. Otherwise, I will implement some performance changes and see where that gets me.

By the way, I haven't implemented the remove method for the vector I was talking about. However, even as it is now, the vector has a DropLast(m) method that removes the last m items in O(log n). It is derived from the Take(m) method. For comparison, it is somewhat faster than an update.

I have been thinking about a different revision that is more complicated and probably involves a redesign of the data structure. It does involve real deferred execution and hopefully it should make every operation as fast as a bulk operation. I'm not sure when I'll start working on this though.

On a completely different note, I scaled back the required framework version to 3.5 Client Profile to make the library more accessible and removed tons of unnecessary references. In addition, the reference to nunit.framework is now a conditional reference based on whether the DEBUG symbol is defined.


It would be sweet if Vector and xlist both have time to mature in Experimental and get promoted to the Collections namespace by the time of LambdaJam in early July. I'm giving a talk in which FSharpx plays a prominent role. HashMap sounds worthwhile for sure.


@GregRos hope you are still planning on contributing to the project. I've been swamped by my day job, myself.


I just ran across this post while looking for an immutable vector that supports no worse than O(log n) insert/append. What's the current status on bringing some of @GregRos's Solid library into FSharpx.Collections?

jackfoxy commented Jul 5, 2014

@davidkellis it seems @GregRos kind of dropped off the map. I suppose he got busy with something else.

fsgit commented Sep 26, 2014

Closing as the Fsharpx collections have moved to Please reopen there if necessary.

@fsgit fsgit closed this Sep 26, 2014