sys.setrecursionlimit: OverflowError still raised when int limited in sys.maxsize #85485
Comments
Consider this code:
You need to divide by a number with 10 zeros (sys.maxsize // 10000000000) to get it to work; // 20000000000 doesn't work.
Tested: 2**31 - 31 is the max, which is 2147483617, compared to sys.maxsize's 9223372036854775807!
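The original snippet did not survive the migration; a minimal sketch of the reported failure, assuming the report is about passing sys.maxsize directly, is:

```python
import sys

old_limit = sys.getrecursionlimit()
try:
    # On a 64-bit build, sys.maxsize (2**63 - 1) does not fit in the
    # C int that stores the recursion limit, so this raises OverflowError.
    sys.setrecursionlimit(sys.maxsize)
except OverflowError as exc:
    print("OverflowError:", exc)
finally:
    # Restore the limit in case the call succeeded (e.g. 32-bit build).
    sys.setrecursionlimit(old_limit)
```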
The recursion depth and recursion limit are stored internally as C ints, so yes, 2**31 - 1 = 2147483647 is the maximum value that you can pass to sys.setrecursionlimit.

But it's unclear why you'd ever want to set the recursion limit that high. What's your goal here? Are you looking for a way to effectively disable the recursion limit altogether? If so, can you explain the use-case?

Note that the recursion limit is there to prevent your code from exhausting the C stack and segfaulting as a result: simply setting that limit to a huge value won't (by itself) allow you to run arbitrarily deeply nested recursions. On my machine, without the protection of the recursion limit, a simple recursive call segfaults at around a depth of 30800.
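The protective role of the limit can be illustrated safely with a sketch that recurses until the interpreter intervenes (the exact depth reached depends on how many frames are already on the stack):

```python
import sys

def probe(depth=0):
    """Recurse until the interpreter raises RecursionError,
    then report how deep we got."""
    try:
        return probe(depth + 1)
    except RecursionError:
        return depth

# Python stops us near sys.getrecursionlimit(), long before the
# C stack itself would overflow and segfault.
print(probe(), "vs limit", sys.getrecursionlimit())
```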
Setting to pending; I don't see any bug here, and I suspect the original report was based on a misunderstanding of what sys.setrecursionlimit is for. |
Mark has already mentioned that setting the recursion limit to a huge value won't by itself allow deeper recursion. Apart from that, sys.maxsize is actually documented like this: maxsize -- the largest supported length of containers. So it applies to Py_ssize_t (the signed version of size_t), but not to the C int in which the recursion limit is stored.
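That distinction can be checked directly: sys.maxsize matches the pointer-sized Py_ssize_t, while a plain C int is usually 32 bits. A sketch using ctypes (widths are platform-dependent):

```python
import ctypes
import sys

ssize_bits = ctypes.sizeof(ctypes.c_ssize_t) * 8  # Py_ssize_t width
int_bits = ctypes.sizeof(ctypes.c_int) * 8        # C int width

# sys.maxsize is the largest value a Py_ssize_t can hold...
assert sys.maxsize == 2 ** (ssize_bits - 1) - 1

# ...which on a typical 64-bit build is far larger than what fits
# in the C int used for the recursion limit.
print(f"Py_ssize_t: {ssize_bits} bits, C int: {int_bits} bits")
```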