Fix a potential bug when all elements are individually deleted from a hash #1410
Two optimisations were coming together in an unforeseen way, which could
cause an assertion failure on a debugging build.
There's a space optimisation for empty hashes which only allocates the control structure, and flags this with both `cur_items` and `max_items` set to 0.
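As a rough illustration of that flag convention (the field names follow the description above; the struct layout itself is invented for the sketch, not MoarVM's actual one):

```c
/* Illustrative only: a made-up control structure using the two counters
 * described above.  In the empty-hash space optimisation, only this
 * structure is allocated, and that state is flagged by both counters
 * being 0. */
struct hash_control {
    unsigned int cur_items;   /* entries currently stored                     */
    unsigned int max_items;   /* entries allowed before the next reallocation */
    /* ...buckets and metadata would follow in a real implementation...       */
};

int looks_like_empty_optimisation(const struct hash_control *hc) {
    return hc->cur_items == 0 && hc->max_items == 0;
}
```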
To avoid repeated checks in the insert code for probe distance overflows, if an insert detects that it has caused any probe distance to hit the limit, it signals that the next insert will need to reallocate by setting `max_items` to 0.

What I'd not realised was that if a hash has a series of inserts that happen to end in that "next insert needs to reallocate" state, but there are no more inserts, then `max_items` remains set to 0 to flag this and is never "cleared". This usually doesn't matter, but in the specific unusual case that the code then systematically deletes all entries in the hash (without ever making any more inserts), the count `cur_items` will return to 0, and hence the special case "empty hash" optimisation flag state is set, but with a regular allocation.
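A minimal standalone trace of the interaction, using plain counters to stand in for `cur_items` and `max_items` (the capacity, probe limit, and insert counts are invented; the final assert deliberately fires to mirror the debug-build failure):

```c
#include <assert.h>

int main(void) {
    unsigned int cur_items = 0;
    unsigned int max_items = 8;   /* made-up capacity before the next grow */
    int regular_allocation = 0;   /* 1 once more than the control structure is allocated */

    /* A run of inserts; suppose the last one pushes some probe distance to
     * its limit, so the insert path sets max_items = 0 as its "reallocate
     * on the next insert" sentinel. */
    regular_allocation = 1;
    for (int i = 0; i < 5; i++)
        cur_items++;
    max_items = 0;                /* sentinel left in place: no further inserts happen */

    /* Every entry is then deleted, one by one, with no intervening insert. */
    while (cur_items)
        cur_items--;

    /* The counters now match the empty-hash optimisation's flag state even
     * though a regular allocation still exists - the inconsistency that the
     * debugging build's assertion catches. */
    assert(!(cur_items == 0 && max_items == 0 && regular_allocation));
    return 0;
}
```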
This doesn't happen reliably (it depends on hash randomisation), but could
sometimes be hit by t/spec/integration/advent2012-day14.t
To trip the assertion, both assertions and HASH_DEBUG_ITER need to be
enabled. If these aren't hit (and abort the process), I think that the
upshot of hitting this bug would be that part of a larger section of memory
would be returned to the FSA pool for a smaller bin size. Given that the
larger section of memory was still deemed allocated from its bin, and the
FSA never returns blocks of bins to free() (until global shutdown), I don't
think that this could cause any memory corruption.
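To see why returning an over-sized block to a smaller bin is a leak rather than corruption, here is a toy two-bin fixed-size allocator (this is not MoarVM's FSA; the names, sizes, and single-page backing are all invented for the sketch):

```c
/* Toy fixed-size allocator: one backing page per bin, never free()d until
 * shutdown, with freed chunks threaded onto a per-bin free list. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_CHUNKS 16

typedef struct FreeNode { struct FreeNode *next; } FreeNode;

typedef struct {
    size_t    chunk_size;  /* size of every chunk handed out by this bin */
    FreeNode *free_list;   /* chunks returned to this bin                */
    char     *page;        /* backing page, kept until global shutdown   */
    size_t    used;        /* chunks carved from the page so far         */
} Bin;

static void bin_init(Bin *b, size_t chunk_size) {
    b->chunk_size = chunk_size;
    b->free_list  = NULL;
    b->page       = malloc(chunk_size * PAGE_CHUNKS);
    b->used       = 0;
}

static void *bin_alloc(Bin *b) {
    if (b->free_list) {
        FreeNode *n = b->free_list;
        b->free_list = n->next;
        return n;
    }
    assert(b->used < PAGE_CHUNKS);
    return b->page + (b->used++) * b->chunk_size;
}

static void bin_free(Bin *b, void *p) {
    FreeNode *n = p;
    n->next = b->free_list;
    b->free_list = n;
}

int main(void) {
    Bin small, large;
    bin_init(&small, 32);
    bin_init(&large, 256);

    /* A full hash body allocated from the large bin... */
    void *body = bin_alloc(&large);

    /* ...mistakenly freed as if it were only the small control structure. */
    bin_free(&small, body);

    /* A later small allocation reuses the first 32 bytes of the block; the
     * remaining 224 bytes are leaked, but every write stays inside memory
     * the allocator still owns, so nothing is corrupted. */
    void *reused = bin_alloc(&small);
    memset(reused, 0xAB, small.chunk_size);
    printf("reused the misfiled block? %s\n", reused == body ? "yes" : "no");

    free(small.page);
    free(large.page);
    return 0;
}
```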