GC operates out of global runtime state. #81035
(also see bpo-24554) We need to move GC state from _PyRuntimeState to PyInterpreterState. |
It's now done, and will ship in Python 3.9! |
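To illustrate what the change means in practice, here is a minimal sketch (not from the issue; it assumes a CPython 3.9+ development build, and _testcapi is an internal test-only module): with GC state stored per interpreter, tuning the collector in a subinterpreter no longer affects the main interpreter.

import gc
import _testcapi  # internal CPython test module, available in dev builds

# With GC state in PyInterpreterState (3.9+), each interpreter owns its
# own thresholds; before the change, the subinterpreter call below would
# have clobbered the process-wide state.
gc.set_threshold(700, 10, 10)
_testcapi.run_in_subinterp("import gc; gc.set_threshold(1, 1, 1)")
print(gc.get_threshold())  # still (700, 10, 10)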
I reopen the issue: the change introduced a reference leak :-( Example:

$ ./python -m test -R 3:3 test_atexit -m test.test_atexit.SubinterpreterTest.test_callbacks_leak
0:00:00 load avg: 1.12 Run tests sequentially
0:00:00 load avg: 1.12 [1/1] test_atexit
beginning 6 repetitions
123456
......
test_atexit leaked [3988, 3986, 3988] references, sum=11962
test_atexit leaked [940, 939, 940] memory blocks, sum=2819
test_atexit failed

== Tests result: FAILURE ==

1 test failed:
    test_atexit

Total duration: 466 ms

It seems like each _testcapi.run_in_subinterp("pass") call leaks 3988 references. I tried tracemalloc to see where the memory allocations are done, but tracemalloc reports a single Python line: the _testcapi.run_in_subinterp() call... I tried to follow the increase of references using a watchpoint in gdb on _Py_RefTotal, but it takes a lot of time to follow each Py_INCREF/Py_DECREF knowing that we are talking about around 4,000 references. |
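For reference, a hedged sketch (not from the issue) of the kind of tracemalloc comparison described above; it assumes a build where _testcapi is importable, and captures deep tracebacks via tracemalloc.start(25):

import tracemalloc
import _testcapi  # internal CPython test module

tracemalloc.start(25)  # record up to 25 frames per allocation
before = tracemalloc.take_snapshot()
_testcapi.run_in_subinterp("pass")
after = tracemalloc.take_snapshot()

# Group the surviving allocations by full traceback; as noted above,
# everything collapses onto the run_in_subinterp() call itself.
for stat in after.compare_to(before, "traceback")[:5]:
    print(stat)
    for line in stat.traceback.format():
        print(line)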
Even if the test is simplified to the following code, it still leaks:

def test_callbacks_leak(self):
    _testcapi.run_in_subinterp("pass") |
The following patch fixes it:

diff --git a/Python/pylifecycle.c b/Python/pylifecycle.c
index 7591f069b4..f088ef0bce 100644
--- a/Python/pylifecycle.c
+++ b/Python/pylifecycle.c
@@ -1210,6 +1210,15 @@ finalize_interp_clear(PyThreadState *tstate)
 {
     int is_main_interp = _Py_IsMainInterpreter(tstate);
 
+    _PyImport_Cleanup(tstate);
+
+    /* Explicitly break a reference cycle between the encodings module and XXX */
+    PyInterpreterState *interp = tstate->interp;
+    Py_CLEAR(interp->codec_search_path);
+    Py_CLEAR(interp->codec_search_cache);
+    Py_CLEAR(interp->codec_error_registry);
+    _PyGC_CollectNoFail();
+
     /* Clear interpreter state and all thread states */
     PyInterpreterState_Clear(tstate->interp);
@@ -1640,7 +1649,6 @@ Py_EndInterpreter(PyThreadState *tstate)
         Py_FatalError("Py_EndInterpreter: not the last thread");
     }
 
-    _PyImport_Cleanup(tstate);
     finalize_interp_clear(tstate);
     finalize_interp_delete(tstate);
 }

Py_NewInterpreter() indirectly calls "import encodings", which calls codecs.register(search_function). This encodings function is stored in interp->codec_search_path and so keeps the encodings module dict alive.

_PyImport_Cleanup() removes the last reference to the encodings *module*, but the module deallocator function (module_dealloc()) doesn't clear the dict: it only removes its strong reference to it ("Py_XDECREF(m->md_dict);").

interp->codec_search_path is cleared by PyInterpreterState_Clear(), which is called by Py_EndInterpreter(). But that is not enough to clear some objects. I'm not sure if the encodings module dict is still alive at this point, but it seems like at least the sys module dict is.

I can push my workaround which manually "breaks a reference cycle" (really? which one?), but I may be interested in digging into this issue to check whether we can find a better design.

The _PyImport_Cleanup() and _PyModule_Clear() functions are fragile. They implement smart heuristics to attempt to keep Python functional as long as possible *and* try to clear everything. The intent is to be able to log warnings and exceptions during the Python shutdown, for example. The problem is that the heuristic keeps some objects alive longer than expected. For example, I would expect _PyImport_Cleanup() to not only call sys.modules.clear(), but also clear the dict of each module (module.__dict__.clear()). It doesn't, and I'm not sure why. |
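The reference chain described above can be reproduced from pure Python. A minimal sketch (not from the issue; the fake module and its names are invented for illustration, and interp->codec_search_path itself is C-level state, not visible from Python):

import codecs
import types
import weakref

# Mirror what "import encodings" does at startup: a module registers a
# codec search function defined in its own namespace.
mod = types.ModuleType("fake_encodings")
exec("def search_function(name):\n    return None", mod.__dict__)
codecs.register(mod.__dict__["search_function"])

mod_dict = mod.__dict__   # keep a handle so we can inspect the dict
wr = weakref.ref(mod)
del mod                   # drop the last reference to the module object

print(wr() is None)                    # True: the module object is gone
print("search_function" in mod_dict)   # True: its dict is still alive

# The codec registry holds search_function, and search_function.__globals__
# is the module dict, so the dict survives module_dealloc(), exactly as
# described above.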
I close the issue again ;-) |
Thanks so much for getting this done, Victor!
Should we have an issue open for finding a better solution? Are there risks with what you did that we don't want long-term? |
Did I mention that you're my hero? :) |
Victor> I'm not fully happy with this solution

Eric> Should we have an issue open for finding a better solution? Are there risks with what you did that we don't want long-term?

Pablo made a small change to my workaround, calling _PyGC_CollectNoFail() after PyInterpreterState_Clear(). I tried to avoid that, since I consider that no arbitrary Python code should be called after PyInterpreterState_Clear(), whereas the GC can trigger arbitrary __del__() methods implemented in pure Python. See the discussion at #17457.

Each time I tried to fix a bug in the Python finalization, I introduced worse bugs :-D We cannot fix all bugs at once; we have to work incrementally. I like the idea of introducing workarounds specific to subinterpreters: leave the code path for the main interpreter unchanged. It makes it easier to iterate and slowly fix the code.

I prefer not to open an issue, since the Python finalization is broken in so many ways :-D Anyway, I hit issues in the finalization every time I work on subinterpreter changes, so it's hard to forget about it :-)

I started to take notes at: |
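To make the risk concrete, a short sketch (not from the issue): collecting a reference cycle can run arbitrary pure-Python __del__() methods, which is exactly what must not happen after PyInterpreterState_Clear():

import gc

class Finalizer:
    def __del__(self):
        # Arbitrary Python code runs here; if the collection happens
        # after the interpreter state is cleared, running it is unsafe.
        print("__del__ called during gc.collect()")

a = Finalizer()
b = Finalizer()
a.partner = b
b.partner = a      # reference cycle: only the cyclic GC can reclaim it
del a, b
gc.collect()       # triggers both __del__ methods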
Eric: You're welcome. I'm a believer that subinterpreters are one of the most realistic solutions to make Python faster. I said it in my EuroPython keynote on CPython performance ;-) https://github.com/vstinner/talks/blob/master/2019-EuroPython/python_performance.pdf "Conclusion: PyHandle, tracing GC and subinterpreters are very promising!" |
:)
+1
+1
Sounds good. :)
Again, thanks for that! |