ctypes: problem with large integers #44869
Comments
Python 2.5 (r25:51908, Nov 24 2006, 11:03:50)
[GCC 3.4.4 20050721 (Red Hat 3.4.4-2)] on linux2
>>> from ctypes import *
>>> c_int(2**31).value
-2147483648
>>> c_long(2**32-1).value
-1

In a 64-bit build, the situation is even worse:

>>> c_int(int(2**32)).value
0
>>> c_int(2**32).value
0
Another way to see the problem:
>>> c = CDLL(None)
>>> c.printf("%d\n", 2**32)
0
2
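The truncation in the report can be reproduced directly; this is a minimal sketch assuming a typical platform where c_int is 32 bits:

```python
import ctypes

# Values outside c_int's range are silently truncated to the low
# 32 bits: 2**31 wraps around to the most negative value, and
# 2**32 wraps to 0.
print(ctypes.c_int(2**31).value)       # -2147483648
print(ctypes.c_int(2**32).value)       # 0

# The unsigned type keeps the same bits but reads them as unsigned.
print(ctypes.c_uint(2**32 - 1).value)  # 4294967295
```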
This works as designed. ctypes intentionally does no overflow checking when using the c_int, c_uint, and related integer types. Instead, only the available bits are used - just as in C. Closing as invalid (sorry).
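The "only the available bits are used" behaviour can be sketched in pure Python; `truncate_like_c_int` below is a hypothetical helper name, and the sketch assumes c_int is 32 bits:

```python
import ctypes

def truncate_like_c_int(value, bits=32):
    """Keep only the low `bits` bits and reinterpret them as a signed
    two's-complement integer, mimicking what ctypes (and C) does."""
    value &= (1 << bits) - 1          # keep only the available bits
    if value >= 1 << (bits - 1):      # sign bit set -> negative value
        value -= 1 << bits
    return value

# Matches c_int for in-range and out-of-range inputs alike.
for n in (0, -1, 2**31 - 1, 2**31, 2**32 - 1, 2**32):
    assert truncate_like_c_int(n) == ctypes.c_int(n).value
```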
An issue remains with the implicit conversion. On a 64-bit platform (where long is 64 bits):

>>> c.printf("%d\n", 1 << 64)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ctypes.ArgumentError: argument 2: <type 'exceptions.OverflowError'>: long int too long to convert

So it does do overflow checking, but:

>>> c.printf("%d\n", (1 << 64) - 1)
-1
3
>>> c.printf("%d\n",(1<<32))
0
2
I must say I do not care too much about the remaining issue. To be portable between 32-bit and 64-bit platforms you should define .argtypes anyway, or explicitly wrap the arguments in ctypes instances if setting .argtypes is not possible, as for printf.
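A sketch of that advice, assuming a Linux-style libc reachable via CDLL(None) as in the transcripts above, and Python 3 (where char * arguments must be bytes):

```python
import ctypes

# Load the process's own symbols (glibc behaviour; other platforms
# may need ctypes.util.find_library("c") instead).
libc = ctypes.CDLL(None)
libc.printf.restype = ctypes.c_int

# printf is variadic, so .argtypes cannot describe every call; wrapping
# each argument in an explicit ctypes type makes its width unambiguous
# on both 32-bit and 64-bit platforms.
n = ctypes.c_longlong(1 << 32)
libc.printf(b"%lld\n", n)   # prints 4294967296
```

printf returns the number of characters written, so the call above returns 11 (ten digits plus the newline) rather than silently truncating the value.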