GH-133136: Revise QSBR to reduce excess memory held #135473


Draft · wants to merge 8 commits into main

Conversation

@nascheme (Member) commented Jun 13, 2025

This is a refinement of GH-135107. Additional changes:

  • track the size of the mimalloc pages that are deferred
  • introduce _Py_qsbr_advance_with_size() to reduce duplicated code
  • adjust the logic of when we advance the global write sequence and when we process the queue of deferred memory
  • small fix for the goal returned in the advance case: it is safe to return the new global write sequence rather than the next write sequence

With these changes, the memory held by QSBR is typically freed a bit more quickly and the process RSS stays a bit smaller.

Regarding the changes to advancing and processing, GH-135107 has the following minor issue: if the memory threshold is exceeded when a new item is added by free_delayed(), we immediately set memory_deferred = 0 and process. It is very unlikely that the goal has been reached for the newly added item, so if that item is a big chunk of memory, we would have to wait until the next process in order to actually free it. This PR tries to avoid that by storing the seq (local read sequence) as it was at the last process time. If that hasn't changed (this thread hasn't entered a quiescent state), we wait before processing. This at least gives a chance for other readers to catch up so that the process can actually free things.
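Roughly, the gating looks like this (a sketch only; QSBR_MEM_THRESHOLD and seq_at_last_process are illustrative names, not necessarily the ones used in this PR):

```c
// Sketch of the gating described above.  Only process the deferred queue if
// this thread has passed a quiescent state since the last processing pass;
// otherwise the goal for the just-added block almost certainly hasn't been
// reached and processing would be wasted work.
static void
maybe_process_delayed(PyThreadState *tstate, struct _qsbr_thread_state *qsbr)
{
    if (qsbr->memory_deferred > QSBR_MEM_THRESHOLD      // enough bytes queued
        && qsbr->seq != qsbr->seq_at_last_process)      // read sequence moved
    {
        qsbr->memory_deferred = 0;
        qsbr->seq_at_last_process = qsbr->seq;
        _PyMem_ProcessDelayed(tstate);
    }
}
```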

This PR also changes how often we defer the advance of the global write sequence. Previously, we deferred it up to 10 times. However, I think there is not much benefit to advancing it unless we are nearly ready to process. So, should_advance_qsbr() checks whether it seems to be time to process. _Py_qsbr_should_process() checks whether the local read sequence has been updated, which means the write sequence has advanced (it's time to process) and the read sequence for this thread has also advanced. This doesn't tell us that the other threads have advanced their read sequences, but we don't want to pay the cost of checking that (it would require a "poll").
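A rough sketch of that check (not necessarily the exact code in this PR; process_seq here is a stored target for this thread's read sequence):

```c
// Processing is worthwhile once this thread's local read sequence has
// reached the stored target: the write sequence advanced and this thread
// has since passed a quiescent state.  Other threads may still lag, but
// checking them would require a full poll.
static bool
qsbr_should_process(struct _qsbr_thread_state *qsbr)
{
    return qsbr->seq >= qsbr->process_seq;
}
```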

pyperformance memory usage results

colesbury and others added 3 commits June 3, 2025 21:29
The free threading build uses QSBR to delay the freeing of dictionary
keys and list arrays when the objects are accessed by multiple threads
in order to allow concurrent reads to proceed without holding the object
lock. The requests are processed in batches to reduce execution
overhead, but for large memory blocks this can lead to excess memory
usage.

Take into account the size of the memory block when deciding when to
process QSBR requests.
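A sketch of what a size-aware advance might look like, following the pattern of the existing deferred advance (the byte counter and both limit names are assumptions):

```c
// Small blocks keep reusing the next write sequence as their goal; once
// enough requests or enough bytes have accumulated, advance the global
// write sequence so the queued memory can be reclaimed sooner.
uint64_t
_Py_qsbr_advance_with_size(struct _qsbr_thread_state *qsbr, size_t size)
{
    qsbr->deferred_memory += size;
    if (++qsbr->deferrals < QSBR_DEFERRED_LIMIT
        && qsbr->deferred_memory < QSBR_DEFERRED_MEM_LIMIT)
    {
        return _Py_qsbr_shared_current(qsbr->shared) + QSBR_INCR;
    }
    qsbr->deferrals = 0;
    qsbr->deferred_memory = 0;
    return _Py_qsbr_advance(qsbr->shared);
}
```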
Comment on lines 143 to 144
size_t bsize = mi_page_block_size(page);
page->qsbr_goal = _Py_qsbr_advance_with_size(tstate->qsbr, page->capacity*bsize);
Contributor

This might be the right heuristic, but this is a bit different from _PyMem_FreeDelayed:

  1. _PyMem_FreeDelayed holds onto the memory until quiescence. It prevents the memory from being used for any purpose.
  2. _PyMem_mi_page_maybe_free only prevents the page from being used by another thread or for a different size class. That's a lot less restrictive.

nascheme (Member Author)

Ah, good point. The memory being held (avoiding collection) by mimalloc is not at all the same as the deferred frees. I reworked the PR so that memory is tracked separately. I also decoupled the write sequence advance from the triggering of _PyMem_ProcessDelayed() and used process_seq as a target value for the read sequence.

Now _qsbr_thread_state is larger than 64 bytes. I don't think that should be a problem.
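Roughly, the per-thread QSBR state then carries extra fields along these lines (names are illustrative; the point is that the two kinds of deferred memory are accounted separately and process_seq schedules the next processing pass):

```c
struct _qsbr_thread_state {
    // ... existing fields: seq, shared, tstate, deferrals, ...

    // Bytes queued by _PyMem_FreeDelayed(); released by _PyMem_ProcessDelayed().
    size_t deferred_memory;

    // Bytes held in mimalloc pages whose collection is deferred.  This is not
    // released by _PyMem_ProcessDelayed(), so it only influences how eagerly
    // the global write sequence is advanced.
    size_t deferred_page_memory;

    // Read-sequence target: once this thread's seq reaches it, the next
    // _PyMem_ProcessDelayed() call is scheduled.
    uint64_t process_seq;
};
```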

nascheme added 4 commits June 16, 2025 13:13
* Keep a separate count of mimalloc page memory that is deferred from
  collection.  This memory doesn't get freed by _PyMem_ProcessDelayed().
  We want to advance the write sequence if there is too much of it,
  but calling _PyMem_ProcessDelayed() is not helpful.

* Use `process_seq` variable to schedule the next call to
  `_PyMem_ProcessDelayed()`.

* Rename advance functions to have "deferred" in name.

* Move `_Py_qsbr_should_process()` call up one level.
Since _Py_atomic_add_uint64() returns the old value, we need to add
QSBR_INCR.
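In other words (a minimal sketch; wr_seq is the shared global write sequence):

```c
// _Py_atomic_add_uint64() returns the value *before* the addition, so the
// new global write sequence is the returned value plus QSBR_INCR.
static uint64_t
qsbr_advance(struct _qsbr_shared *shared)
{
    return _Py_atomic_add_uint64(&shared->wr_seq, QSBR_INCR) + QSBR_INCR;
}
```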
Refactor code to keep obmalloc logic out of the qsbr.c file.  Call
_PyMem_ProcessDelayed() from the eval breaker.
@nascheme (Member Author)

After reverting the erroneous change to _Py_qsbr_advance(), the nice reductions in RSS I was seeing disappeared. After some experimentation, running _PyMem_ProcessDelayed() from the eval breaker works well. That seems to give enough time that, usually, the read sequence has advanced and the deferred memory can be freed quickly.

I refactored the code to put the "should advance" logic into the obmalloc file. I think that makes more sense than having it in qsbr.c.
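A rough sketch of that placement (the handler name is made up, and I'm assuming _Py_qsbr_should_process() takes the per-thread QSBR state):

```c
// Illustrative only: when the eval breaker fires, the thread is between
// bytecodes, so its read sequence has typically advanced and deferred
// blocks whose goal has been reached can be freed right away.
static void
maybe_process_qsbr(PyThreadState *tstate)
{
    struct _qsbr_thread_state *qsbr = ((_PyThreadStateImpl *)tstate)->qsbr;
    if (_Py_qsbr_should_process(qsbr)) {
        _PyMem_ProcessDelayed(tstate);
    }
}
```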

The dict_mutate_qsbr_mem.py.txt benchmark RSS sizes, in MB:

  • Running with the "main" branch, FT build (commit 1ffe913): 312, 543, 728, 912, 1142.

  • Default build using mimalloc instead of pymalloc: 89, 90, 134, 134, 90.

  • GH-135107 (gh-133136: Limit excess memory held by QSBR): 351, 374, 393, 484, 532.

  • This PR: 205, 260, 293, 312, 288, 288.

@nascheme (Member Author)

Updated pyperformance results:

run time

memory usage
