PY_LLONG_MAX & co - preprocessor constants or not? #54534
Include/pyport.h invites potential compile errors with the definitions
#define PY_LLONG_MIN LLONG_MIN
#define PY_LLONG_MAX LLONG_MAX
#define PY_ULLONG_MAX ULLONG_MAX
which can fall back to gcc variants or to
#else
/* Otherwise, rely on two's complement. */
#define PY_ULLONG_MAX (~0ULL)
#define PY_LLONG_MAX ((long long)(PY_ULLONG_MAX>>1))
#define PY_LLONG_MIN (-PY_LLONG_MAX-1)

Code developed with the former definitions might use them in '#if's, which would break with the latter. It would be safer if either all the macros and pyconfig variants used casts, or none did. The signed variants would break because '#if's do not accept casts. PY_ULLONG_MAX is more insidious: if it meets a host whose preprocessor arithmetic is wider than unsigned long long (C99 evaluates '#if' expressions in [u]intmax_t), (~0ULL) no longer equals ULLONG_MAX there. You get the same effect with ~0U vs ~0UL on a 64-bit compiler, and ~0U vs ~0ULL on a C99 compiler:
#if (~0U) == (~0ULL)
# error "oops"
#endif
Incidentally, the "two's complement" comment is wrong. The fallback also relies on unsigned long long being the widest type with no padding bits, and on -LLONG_MAX-1 not being a trap representation. ~0ULL itself is not about two's complement, since it is unsigned: it works because it has the same value as -1ULL, which is defined to be the maximum value.
The PY_LLONG_MIN definitions rely on two's complement. If
anyone cared, one could avoid that with
#define PY_LLONG_MIN (-PY_LLONG_MAX-(/*two's complement*/(-1LL & 3)==3))
Anyway. If the PY_* macros use casts, fix PY_TIMEOUT_MAX in 3.2a3 pythread.h:

#define PY_MIN(x, y) ((x) < (y) ? (x) : (y))

Define PY_TIMEOUT_MAXTMP with the current PY_TIMEOUT_MAX expression, and then:
#ifndef NT_THREADS
#define PY_TIMEOUT_MAX PY_TIMEOUT_MAXTMP
#else
#define PY_TIMEOUT_MAX PY_MIN(Py_LL(0xFFFFFFFF)*1000, PY_TIMEOUT_MAXTMP)
#endif |
Thanks for the report; I agree that there's a potential issue here, and I also think that all these definitions *should* be preprocessor defines.

(Idle question: does C99 require that LONG_MAX and friends are usable in the preprocessor? I see it in e.g. 7.18.2p2 for INT32_MAX and friends, but I'm not sure if there's something similar for LONG_MAX and co.)

Can you suggest a suitable fix for the PY_ULLONG_MAX and PY_LLONG_MAX defines? Note that a configure-based fix may need to take into account the special needs of Windows---for which configure isn't used, of course---and OS X---where the same code, based on a single run of configure, should be able to compile to the correct thing both in 32-bit and 64-bit mode, so that universal builds work; see PC/pyconfig.h and Include/pymacconfig.h respectively for dealing with these issues.

BTW, do you know of any modern non-Windows platforms that don't define LLONG_MIN and LLONG_MAX? It may well be that the "two's complement" fallback hasn't been exercised in recent years.
Agreed---that comment needs to be better. I think it's fine, though, for practical purposes to assume an absence of padding bits and no trap representation; IIRC there are places internally (e.g., in the bitwise operators section of the 'int' type implementation) that already assume two's complement + no padding bits + no trap representation.

(I *thought* I had an old issue open somewhere about documenting---and possibly testing---these assumptions, but now I can't find it.) |
Mark Dickinson writes:
Indeed, my suggestion to castify everything for uniformity was silly. As for the idle question: yes, C99 requires LONG_MAX and friends to be usable in '#if'; see 5.2.4.2.1p1: Sizes of integer types <limits.h>.
As far as I can tell, PC/pyconfig.h already solves it for Windows. For pyport.h, since you do #define SIZEOF_LONG_LONG:

#define PY_LLONG_MAX \
    (1 + 2 * ((Py_LL(1) << (CHAR_BIT*SIZEOF_LONG_LONG-2)) - 1))
#define PY_ULLONG_MAX (PY_LLONG_MAX * 2ULL + 1)

You could check PY_ULLONG_MAX with a compile-time assertion if you want:

#ifndef __cplusplus /* this requires different magic in C++ */
/* Compile-time assertion -- max one per post-preprocessed line */
#define Py_static_assert(expr) Py_static_assert1_(expr, __LINE__)
#define Py_static_assert1_(expr, line) Py_static_assert2_(expr, line)
#define Py_static_assert2_(expr, line) struct Py_static_assert##line { \
    int Assert1_[(expr) ? 9 : -9]; int Assert2_: (expr) ? 9 : -9; }
Py_static_assert(PY_ULLONG_MAX == (unsigned long long)-1);
#endif /* __cplusplus */
Anything compiled in strict ANSI pre-C99 mode, e.g. gcc -ansi, would not see LLONG_MAX from <limits.h>. And unfortunately that can bite even if Python itself is not built with such a flag, since code including the headers may be. Beyond that: no, I know of none, but I don't know many platforms anyway.
I expect so, yes. It's easy to find breakage with non-two's complement anyway. |
I wrote:
Eh, Py_ULL(2).
Just to be clear, I'm peeving at the comment, not the code. |
Here's a patch (against py3k) incorporating your suggestions. Would you be willing to review? (Removing Python 2.6 from versions since it's no longer maintained for non-security issues.) |
Mark Dickinson writes:
Looks fine to me. (Actually the gcc branch makes the same assumptions.) |
I wrote:
which also disables 'long long', so such examples are moot. duh. |
Hallvard B Furuseth writes:
Hold on.. #elif defined SIZEOF_LONG_LONG would be a bit safer than #else. |
No, PY_LLONG_MAX lacks a paren. Found by the revolutionary method of actually compiling the code.

Index: Include/pyport.h
--- Include/pyport.h (revision 86319)
+++ Include/pyport.h (working copy)
@@ -62,15 +62,20 @@
#define PY_LLONG_MAX LLONG_MAX
#define PY_ULLONG_MAX ULLONG_MAX
#elif defined(__LONG_LONG_MAX__)
-/* Otherwise, if GCC has a builtin define, use that. */
+/* Otherwise, if GCC has a builtin define, use that. (Definition of
+ * PY_LLONG_MIN assumes two's complement with no trap representation.) */
#define PY_LLONG_MAX __LONG_LONG_MAX__
-#define PY_LLONG_MIN (-PY_LLONG_MAX-1)
-#define PY_ULLONG_MAX (__LONG_LONG_MAX__*2ULL + 1ULL)
-#else
-/* Otherwise, rely on two's complement. */
-#define PY_ULLONG_MAX (~0ULL)
-#define PY_LLONG_MAX ((long long)(PY_ULLONG_MAX>>1))
-#define PY_LLONG_MIN (-PY_LLONG_MAX-1)
+#define PY_LLONG_MIN (-PY_LLONG_MAX - 1)
+#define PY_ULLONG_MAX (PY_LLONG_MAX * Py_ULL(2) + 1)
+#elif defined(SIZEOF_LONG_LONG)
+/* Otherwise compute from SIZEOF_LONG_LONG, assuming two's complement, no
+ padding bits, and no trap representation. Note: PY_ULLONG_MAX was
+ previously #defined as (~0ULL) here; but that'll give the wrong value in a
+ preprocessor expression on systems where long long != intmax_t. */
+#define PY_LLONG_MAX \
+ (1 + 2 * ((Py_LL(1) << (CHAR_BIT * SIZEOF_LONG_LONG - 2)) - 1))
+#define PY_LLONG_MIN (-PY_LLONG_MAX - 1)
+#define PY_ULLONG_MAX (PY_LLONG_MAX * Py_ULL(2) + 1)
#endif /* LLONG_MAX */
#endif
#endif /* HAVE_LONG_LONG */ |
Fixed in r86552. Thanks! |