use sparse bitsets instead of dense ones for NLL results #48170
Labels:
- C-cleanup: PRs that clean code up or issues documenting cleanup.
- I-compilemem: Problems and improvements with respect to memory usage during compilation.
- I-compiletime: Problems and improvements with respect to compile times.
- T-compiler: Relevant to the compiler team, which will review and decide on the PR/issue.
As part of #47879, it seems clear that using dense bitsets is going to be a problem for NLL sooner or later. We should move to a sparse set. I think that the `BTreeMap` idea sketched in #47575 may be a good choice.

The NLL region representation is defined here: `rust/src/librustc_mir/borrow_check/nll/region_infer/values.rs`, lines 186 to 200 (at 16362c7).
In particular, the field `matrix` uses a `BitMatrix`. The "rows" in the matrix are inference variables, and the columns are the indices defined here (either cfg points or universal regions).
I think we should add a new type `SparseBitMatrix` that offers the same API as `BitMatrix`. It might be represented with a `Vec<SparseSet>`, per #47575, where `SparseSet` is a bit set represented as sketched in this comment. Or it might even just use a single `SparseSet` internally, where each bit is indexed by an index `I = (row * num_columns) + column`.
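To make the proposal concrete, here is a minimal sketch of the `Vec`-of-sparse-sets variant, not the eventual rustc implementation. It assumes a per-row `BTreeSet<usize>` standing in for `SparseSet`, and the method names (`add`, `contains`, `merge`) are chosen to mirror the operations region propagation needs from `BitMatrix`; the exact API is an assumption here.

```rust
use std::collections::BTreeSet;

/// Hypothetical sketch of a `SparseBitMatrix`: one sparse set per row.
/// Rows would be inference variables; columns would be cfg points or
/// universal regions. Only set bits consume memory, unlike a dense matrix.
struct SparseBitMatrix {
    num_columns: usize,
    rows: Vec<BTreeSet<usize>>, // each row stores only its set bits
}

impl SparseBitMatrix {
    fn new(num_rows: usize, num_columns: usize) -> Self {
        SparseBitMatrix {
            num_columns,
            rows: vec![BTreeSet::new(); num_rows],
        }
    }

    /// Sets the bit at (row, column); returns true if it was newly set.
    fn add(&mut self, row: usize, column: usize) -> bool {
        assert!(column < self.num_columns);
        self.rows[row].insert(column)
    }

    fn contains(&self, row: usize, column: usize) -> bool {
        self.rows[row].contains(&column)
    }

    /// Unions row `read` into row `write`; returns true if `write` changed.
    /// This is the fixed-point operation region propagation relies on.
    fn merge(&mut self, read: usize, write: usize) -> bool {
        if read == write {
            return false;
        }
        // Copy out the source row's bits to avoid a double mutable borrow.
        let bits: Vec<usize> = self.rows[read].iter().copied().collect();
        let mut changed = false;
        for bit in bits {
            changed |= self.rows[write].insert(bit);
        }
        changed
    }
}

fn main() {
    let mut m = SparseBitMatrix::new(4, 1000);
    m.add(0, 7);
    m.add(0, 999);
    assert!(m.contains(0, 7));
    assert!(!m.contains(1, 7));
    assert!(m.merge(0, 1)); // row 1 gains the bits of row 0
    assert!(m.contains(1, 999));
}
```

The single-set alternative from the issue would instead keep one sparse set and store each bit under the flat index `I = (row * num_columns) + column`, trading per-row structure for a single container.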