
Optimize generating many bitsets for serialization #365

Open
richardartoul opened this issue Aug 20, 2022 · 8 comments

Comments

@richardartoul
Contributor

richardartoul commented Aug 20, 2022

One of our heaviest workloads uses this library to index a lot of ingested data in real time. In one part of the workload we basically have dozens of goroutines doing something like the following in a loop:

postings := roaring.NewBitmap()
for _, thing := range things {
    postings.Clear()
    for _, row := range $MATCHING_ROWS {
        postings.Add(row)
    }
    postings.WriteTo(w) // w is some io.Writer
}

We've already heavily optimized the code that generates things so that it avoids allocations and pointers. However, when it's time to actually serialize the index we're building, we have to go through this library, and it allocates like crazy even though we're trying to reuse a single data structure.

I've provided a benchmark that demonstrates this, along with some improvements, in this PR: #364

I realize that adding pooling of internal data structures may be a big change, but I tried to structure the PR so that it's completely "opt in" via the new ClearRetainDatastructures() API.
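To make the opt-in concrete, the loop above would only need to change at the Clear() call. This is just a sketch, assuming the new method is otherwise a drop-in replacement for Clear() as proposed in the PR:

postings := roaring.NewBitmap()
for _, thing := range things {
    // Opt-in reuse (proposed in #364): keep the internal containers around
    // instead of freeing them on every iteration.
    postings.ClearRetainDatastructures()
    for _, row := range $MATCHING_ROWS {
        postings.Add(row)
    }
    postings.WriteTo(w)
}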

EDIT: I can't provide screenshots of profiling for obvious reasons, but I just went back and checked, and the allocations removed by this PR represent ~56% of all allocations in our workload, so this will have a very substantial performance impact for us (and, I expect, for others who use the new API).

@richardartoul
Contributor Author

Let me know what you think @lemire !

@richardartoul
Contributor Author

@lemire Pinging again if you have a chance to look.

@lemire
Member

lemire commented Sep 8, 2022

Yes... this is a reasonable use case, but please review my comments on the PR. I think we want a non-invasive solution. That is, we want something that does not fatten the structures and code for everyone, just to handle this one case.

You are probably limited by memory allocation/garbage collection, and thus you try to create pools of arrays to get around the problem. That is fine, but it is also not a general-purpose solution. So we want to push the complexity, the potential bugs, and all such things away from what most people use.

See my sketch of a counterproposal.

@richardartoul
Contributor Author

Thanks for taking a look, I really appreciate it!

I think you're right that what I basically wanted was a custom allocator, so I responded with an alternative proposal that makes this explicit. It's slightly more invasive than your proposal, but it's much more generic and, I think, will lend itself to a much wider variety of incremental optimizations.

@richardstartin
Member

There's another approach, which I have been thinking of implementing in Java, that should be easy so long as the integers added to the bitmaps are sorted: avoid allocating memory in the first place, since the bitmaps being built are themselves temporary. A serialized bitmap has two sections: the header and the containers.

The structure of the header doesn't lend itself to streaming:

field | size (bytes) | purpose
--- | --- | ---
cookie | 4 | identification of a roaring bitmap
run container bitset | numContainers / 8, rounded up | identifies when to interpret a cardinality as a number of runs
repeated container key/cardinality pairs | 4 * numContainers | reconstructing/mapping the index
repeated container offsets | 4 * numContainers | finding container bits

These quantities need to be known in their entirety before writing the header.

Though the header can't be streamed, the containers can: they're just written out contiguously and indexed by the offsets in the header. This lends itself to a lot of object reuse while streaming integers straight to an output. It requires a reusable bitset of run container locations, lists of key/cardinality pairs and container offsets, a container count, and a reusable container to accumulate added integers into (or even just another reusable set to buffer values belonging to the same container).

As values are consumed, the high bits of the values need to be tracked, and whenever there is a new value in the high bits:

  1. the current key and cardinality are saved to the list of key/cardinality pairs the header will be built from
  2. the current offset into the containers' destination is appended to the list of offsets
  3. the current container is written to that destination
  4. if the current container is a run container, the corresponding bit is set in the header bitset

At the end, the header needs to be written into another destination, with the size of the header (which is calculable by this point) added to each container offset. If the destination is a file, it may be possible to move the serialized containers to the file containing the header without performing a copy (this is possible in Java but I don't know how to do it in Go).
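To make that concrete, here is a rough Go sketch of the bookkeeping, assuming sorted input and array containers only (a real writer would also pick bitmap or run containers by density). The header layout follows my reading of the portable format's no-run-container variant (cookie 12346, cardinalities stored minus one, offsets measured from the start of the stream), so treat the exact bytes as illustrative rather than authoritative:

package main

import (
    "bytes"
    "encoding/binary"
    "io"
    "sort"
)

const noRunCookie = 12346 // portable-format cookie when no run containers are present

type streamWriter struct {
    body    bytes.Buffer // serialized containers, appended as each one is finished
    keys    []uint16     // container keys (high 16 bits of the values)
    cards   []uint16     // container cardinalities, stored minus one as in the format
    offsets []uint32     // byte offset of each container within body
    curKey  uint16
    curVals []uint16 // low 16 bits buffered for the current container
    started bool
}

// Add consumes values in ascending order, flushing a container whenever the
// high 16 bits change.
func (w *streamWriter) Add(x uint32) {
    key := uint16(x >> 16)
    if w.started && key != w.curKey {
        w.flushContainer()
    }
    w.started = true
    w.curKey = key
    w.curVals = append(w.curVals, uint16(x&0xFFFF))
}

// flushContainer records the key, cardinality and offset for the header, and
// streams the container bytes into body (array container: sorted 16-bit values).
func (w *streamWriter) flushContainer() {
    if len(w.curVals) == 0 {
        return
    }
    w.keys = append(w.keys, w.curKey)
    w.cards = append(w.cards, uint16(len(w.curVals)-1))
    w.offsets = append(w.offsets, uint32(w.body.Len()))
    for _, v := range w.curVals {
        binary.Write(&w.body, binary.LittleEndian, v)
    }
    w.curVals = w.curVals[:0]
}

// WriteTo finishes the last container, then writes the header followed by the
// body. The header size is only known now, so it is added to each recorded offset.
func (w *streamWriter) WriteTo(out io.Writer) error {
    w.flushContainer()
    n := uint32(len(w.keys))
    headerSize := 8 + 8*n // cookie + container count + (key, cardinality) pairs + offsets
    binary.Write(out, binary.LittleEndian, uint32(noRunCookie))
    binary.Write(out, binary.LittleEndian, n)
    for i := range w.keys {
        binary.Write(out, binary.LittleEndian, w.keys[i])
        binary.Write(out, binary.LittleEndian, w.cards[i])
    }
    for _, off := range w.offsets {
        binary.Write(out, binary.LittleEndian, headerSize+off)
    }
    _, err := out.Write(w.body.Bytes())
    return err
}

func main() {
    vals := []uint32{1, 2, 3, 1 << 16, (1 << 16) + 5}
    sort.Slice(vals, func(i, j int) bool { return vals[i] < vals[j] })
    var w streamWriter
    for _, v := range vals {
        w.Add(v)
    }
    var buf bytes.Buffer
    _ = w.WriteTo(&buf)
}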

This has another rather nice advantage: it avoids a binary search in the container index for each input value. The Java implementation has the concept of a RoaringBitmapWriter, which can speed up construction from sorted inputs a lot for this reason, though streaming straight to disk along these lines hasn't been implemented yet.

@lemire
Member

lemire commented Sep 9, 2022

It does seem that what is needed is a bitmap writer. Of course, it would work much better if the input is sorted, but if not, you could still partially sort it in a buffer before serialization. A bitmap writer for already-sorted data could beat anything else in performance and keep allocations very scarce.
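For instance, even with the current API, the "sort in a buffer first" half of that idea looks roughly like this (a sketch only; a dedicated writer would additionally stream containers out as they fill instead of building the whole bitmap):

package main

import (
    "bytes"
    "sort"

    "github.com/RoaringBitmap/roaring"
)

func main() {
    unsorted := []uint32{70000, 3, 65536, 42}

    // Buffer and sort before touching the bitmap so containers are filled in
    // order rather than being revisited out of order.
    sort.Slice(unsorted, func(i, j int) bool { return unsorted[i] < unsorted[j] })

    rb := roaring.New()
    rb.AddMany(unsorted)

    var buf bytes.Buffer
    if _, err := rb.WriteTo(&buf); err != nil {
        panic(err)
    }
}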

@richardartoul
Contributor Author

A bitmap writer would be excellent for this particular use case, yeah, although it would still be nice to reduce allocations when reading many short-lived bitmaps. I think the frozen implementation is supposed to help with this, but I'd prefer it if we could avoid that.

@lemire
Member

lemire commented Sep 9, 2022

I recommend working on a bitmap writer. It is a well-defined problem, it is relatively easy to implement safely and efficiently, the API won't be confusing, and it will not be a significant burden on the code base. Furthermore, the Java implementation already has a powerful and fast bitmap writer (although not specialized for serialization), so we have validation. If well implemented, it can be faster and simpler than anything else for the task that motivates you.

It is still not trivial to do: if you look at what Richard did in Java, it still requires efficiently handling different densities and data distributions so that the right container is picked. But, at least, if you assume that the input data is sorted, it is a tractable problem.

It does not preclude further work in other directions later.
