Add an id field to PyInterpreterState. #73288
Currently there isn't any way to uniquely identify an interpreter. This patch adds a new "id" field to the PyInterpreterState struct. The ID for every new interpreter is set to the value of an increasing global counter, which means the ID is unique within the process. IIRC, the availability of a unique ID would help tools that make use of subinterpreters, like mod_wsgi. It is also necessary for any effort to expose interpreters in Python-level code (which is the subject of other ongoing work). The patch also adds:

unsigned long PyInterpreterState_GetID(PyInterpreterState *interp)

Note that, without a Python-level interpreters module, testing this change is limited to extending the existing test code in test_capi.
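A minimal sketch of the counting scheme described above, outside of CPython. The interp_t struct, interp_init(), and interp_get_id() are hypothetical stand-ins for PyInterpreterState and the proposed accessor; the real field would be assigned under the runtime's lock:

```c
#include <stdint.h>

/* Hypothetical stand-in for PyInterpreterState. */
typedef struct interp {
    int64_t id;   /* unique within the process, never reused */
} interp_t;

/* Monotonically increasing counter; a real implementation would
 * guard this with a lock for thread safety. */
static int64_t next_interp_id = 0;

/* Assign a process-unique ID at creation time. */
static void interp_init(interp_t *interp) {
    interp->id = next_interp_id++;
}

/* Mirrors the shape of the proposed PyInterpreterState_GetID(). */
static int64_t interp_get_id(const interp_t *interp) {
    return interp->id;
}
```

Because the counter only ever increases, two interpreters created at the same address at different times still get distinct IDs, which is the property pointers lack.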
Why not just use the pointer to the PyInterpreterState itself?
Pointers can get re-used, so they aren't temporally unique.
What is the use case for keeping the uniqueness after deleting an interpreter?
Tracking purposes mainly, so someone outside the interpreter state can tell when it's no longer there. Making interpreter states weak-referenceable would have a similar effect, and could very well use this ID if we didn't need the callback.
If we add an API for getting a unique ID for an interpreter state, do we also need an API for getting the interpreter state by ID?
Three reasons come to mind:
Since PyInterpreterState is not a PyObject, using weakrefs to address the third point won't work, right?
There is an issue with integer identifiers of threads. See bpo-25658 and https://mail.python.org/pipermail/python-ideas/2016-December/043983.html.
That's an issue with TLS initialisation, not thread IDs. It's easily solved by defining an "uninitialized" value (e.g. 0) and an "invalid" value (e.g. -1).

Interpreter states are in a linked list, so you can traverse the list to find one by ID.

WRT weakrefs, we can't use them directly, but I suspect the higher-level API will need it. Possibly adding a callback on finalisation would fill both needs, but I like having a reliable ID - otherwise we'll probably end up with multiple different IDs managed indirectly via callbacks. (Perhaps a single callback for when any interpreter is finalized that passes the ID through? That should work well, since the ID is designed to outlive the interpreter itself, so it can be an asynchronous notification.)
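The "traverse the list" idea can be sketched in plain C. The node_t struct and find_by_id() below are hypothetical stand-ins for the linked list of PyInterpreterState objects that CPython maintains; a NULL result covers the case of an ID whose interpreter has already been finalized:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical interpreter-state node; CPython keeps its real
 * interpreter states in a similar singly linked list. */
typedef struct node {
    int64_t id;
    struct node *next;
} node_t;

/* Walk the list looking for a state with the given ID.
 * Returns NULL when no live interpreter has that ID, which is
 * exactly what happens for an ID that outlived its interpreter. */
static node_t *find_by_id(node_t *head, int64_t id) {
    for (node_t *n = head; n != NULL; n = n->next) {
        if (n->id == id) {
            return n;
        }
    }
    return NULL;
}
```

This is O(n) in the number of interpreters, which is fine given how few interpreters a process typically holds.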
"Interpreter states are in a linked list, so you can traverse the list to find one by ID."

Exactly. At first I had added a PyInterpreterState_FindByID() or something.

"WRT weakrefs, we can't use them directly, but I suspect the higher-level API will need it."

Everything you said about weakrefs sounds good. We can discuss more when we get there.
+1 from me for the general idea. One subtlety with the draft implementation is that an Initialize/Finalize cycle doesn't reset the counter, which:
What do you think about resetting the counter back to 1 in Py_Initialize? |
Wouldn't this break the main property of IDs, their uniqueness?
It depends on the scope of uniqueness we're after. If we wanted to track "Which Initialize/Finalize cycle is this?" *as well*, it would make more sense to me to have that as a separate "runtime" counter, such that the full coordinates of the current point of execution were:
I'll also note that in the threading module, the main thread is implicitly thread 0 (but named as MainThread) - Thread-1 is the first thread created via threading.Thread. So it may make sense to use a signed numeric ID, with 0 being the main interpreter, 1 being the first subinterpreter, and negative IDs being errors.
If we bump it up to a 64-bit ID then it'll be no worse than how we track all dict mutations.
Sounds good to me. When I was working on the patch I had the idea in the back of my mind that not resetting the counter would better support interpreter separation efforts in the future. However, after giving it some thought I don't think that's the case. So resetting it in Py_Initialize() is fine with me.
I had considered that and went with an unsigned long. 0 is used for errors, and IDs start at 1, which effectively means the main interpreter is always 1. If we later run into overflow issues then we can sort that out at that point (e.g. by moving to a 64-bit int or even a Python int). I'll add comments to the patch regarding these points.
Here's the updated patch. |
The concern I have with using an unsigned value as the interpreter ID is that it's applying the "NULL means an error" idiom or the "false means an error" idiom to a non-pointer and non-boolean return type, whereas the common conventions for integer return values are:
If we were to use int_fast32_t for IDs instead, then any negative value can indicate an error, and the main interpreter could be given ID 0 to better align with the threading.Thread naming scheme. Whether we hit a runtime error at 2 billion subinterpreters or 4 billion subinterpreters in a single process isn't likely to make much difference to anyone, but choosing an idiosyncratic error indicator will impact everyone who attempts to interact with the API.
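The signed-ID convention argued for above can be shown with a small sketch. get_interp_id() is a hypothetical lookup, not a CPython function; the point is just that a signed return type lets any negative value carry the error, in the usual C style:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the signed-ID convention: negative means error, 0 is
 * the main interpreter, positive values are subinterpreters.
 * The void* parameter stands in for a PyInterpreterState*. */
static int64_t get_interp_id(const void *interp_state) {
    if (interp_state == NULL) {
        return -1;   /* idiomatic C error indicator */
    }
    /* A real implementation would read the ID off the state
     * struct; this sketch just reports the main interpreter. */
    return 0;
}
```

Callers then use the familiar `if (id < 0) { /* handle error */ }` pattern instead of checking a magic unsigned sentinel.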
I fully expect subinterpreters to have a serious role in long running applications like web servers or other agents (e.g. cluster nodes), so I'd rather just bite the bullet and take 64 bits now so that we can completely neglect reuse issues. Otherwise we'll find ourselves adding infrastructure to hide the fact that you may see the same ID twice. Another four bytes is a cheap way to avoid an entire abstraction layer.
Yeah, I'm also fine with using int_fast64_t for the subinterpreter count. The only thing I'm really advocating for strongly on that front is that I think it makes sense to sacrifice the sign bit in the ID field as an error indicator that provides a more idiomatic C API.
int_fast64_t it is then. :) I vacillated between the options and went with the bigger space. However, you're right that following convention is worth it.
I've updated the patch to address Nick's review. Thanks! |
I would prefer not to use the "fast" C types because they are not well supported. For example, ctypes has ctypes.c_int64 but no ctypes.c_int_fast64.

Previous work adding a unique identifier: PEP 509.
Thanks for pointing that out, Victor. Given the precedent I switched to using int64_t. The patch actually uses PY_INT64_T, but I didn't see a reason to use int64_t directly. FWIW, there *are* a few places that use int_fast64_t, but they are rather specialized and I didn't want this patch to be a place where I had to deal with setting a more general precedent. :)
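The difference between the two type families is worth making concrete: int64_t is exactly 64 bits wherever it exists, while int_fast64_t is only guaranteed to be *at least* 64 bits and may be wider on some platforms, which is why fixed-width types map more cleanly onto things like ctypes.c_int64. A tiny sanity check:

```c
#include <stdint.h>

/* Exact-width vs. fast types: int64_t is guaranteed to be exactly
 * 64 bits; int_fast64_t is only guaranteed to be at least 64 bits
 * (its actual width is implementation-defined). */
static int id_width_ok(void) {
    return sizeof(int64_t) * 8 == 64
        && sizeof(int_fast64_t) * 8 >= 64;
}
```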
What is the status of this issue, Eric? Do you still need the interpreter ID?
Yes, I still need it. :)
This change added a compiler warning:

./Programs/_testembed.c: In function ‘print_subinterp’:
Thanks for pointing this out, Serhiy. I'll take a look in the morning. |
Does someone know the PRxxx constant for int64_t?
Apparently it is PRId64.
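For reference, PRId64 comes from <inttypes.h> and expands to the correct printf conversion for int64_t on each platform ("ld" on most 64-bit Unix systems, "lld" elsewhere), which avoids exactly the kind of format-string warning mentioned above. The helper below is just an illustration:

```c
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Format an int64_t portably with the PRId64 macro and check the
 * result against an expected string. */
static int id_formats_as(int64_t id, const char *expected) {
    char buf[64];
    snprintf(buf, sizeof buf, "id = %" PRId64, id);
    return strcmp(buf, expected) == 0;
}
```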
(see bpo-30447) |
I've fixed the compiler warning via d1c3c13.