Hash function is not randomized properly #58826
Fix of this http://bugs.python.org/issue13703 is broken. tl;dr: there are only 256 different hash functions (compare that to the size of the _Py_HashSecret prefix and suffix), and whether keys collide or not depends only on the last 8 bits of the prefix. The problem with the current randomization of the hash function is the following: to mount a DoS attack, an attacker must do the following:

E.g. these strings collide for every prefix ending in 0xcd:
Thanks for filing this bug report. I'm not seeing the equal hashes you describe. I'm using this recipe to hardcode a specific prefix and print the hashes using it:

On a 32-bit build of Python 2.7.3 (i686), if I set _Py_HashSecret.prefix=0xcdcdcdcd, I get non-equal hashes for the data you specify (output trimmed somewhat for conciseness):

```
$1 = {prefix = 0, suffix = 0}
...
```

Similarly, on a 64-bit build of Python 2.7.3 (x86_64), I get non-equal hashes.

Did I misunderstand the report? Thanks.
I don't understand this issue: can you write a short script to test a collision?

"E.g. these strings collide for every prefix ending in 0xcd" — do you mean that prefix & 0xff == 0xcd?

"0x27fd5a18, 0x26fe78fa" — is that a byte string or a Unicode string? b'\x27\xfd\x5a\x18' and b'\x26\xfe\x78\xfa'?

Using the PYTHONHASHSEED environment variable, it's easy to find two values generating the same _Py_HashSecret. Just one example:

PYTHONHASHSEED=3035016679:
* _Py_HashSecret = {0xcd5192eff3fd4d58, 0x3926b1431b200720}

PYTHONHASHSEED=4108758503:
* _Py_HashSecret = {0xcd5192eff3fd4d58, 0x3926b1431b200720}

I wrote find_hash_collision.py to try to compute a collision, but the program fails. See the attached program: it generates a random seed. Uncomment "seed = generate_seed_0xCD()" if the prefix must end with the 0xCD byte.
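A quick external check of such a seed pair (a sketch; hash_under_seed is an ad-hoc helper, and equal output is expected only on builds of that era where the two seeds expand to the same secret, as reported above):

```python
import os
import subprocess
import sys

def hash_under_seed(seed, s="some-string"):
    """Hash `s` in a fresh interpreter started with the given PYTHONHASHSEED."""
    env = dict(os.environ, PYTHONHASHSEED=str(seed))
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash(%r))" % s], env=env
    )
    return int(out)

h1 = hash_under_seed(3035016679)
h2 = hash_under_seed(4108758503)
print(h1, h2, h1 == h2)   # True on builds where the two secrets coincide
```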
My bad (I checked only the function in C++, not the result in Python).
For example, take this script (on 32-bit) and run the following: it gives collisions too many times (around 9 out of 2500).
```
$ gdb --eval-command="break _PyRandom_Init" --eval-command="run" \
      --eval-command="print _Py_HashSecret" \
      --eval-command="set _Py_HashSecret.prefix=0xcdcdcd00" \
      --eval-command="print _Py_HashSecret" \
      --eval-command="continue" --eval-command="continue" \
      --args python -c 'a="\x00\xcf\x0b\x96\x19"; b="\x00\x6d\x29\x45\x18"; print(hash(a)); print(hash(b))'
```

On 32-bit: ... On 64-bit: ...
I tried this script on Linux 32-bit and Linux 64-bit: I didn't see any collision. What is your operating system and the version of your operating system, please?
@dmalcolm: With the fresh pair of values Vlado provided, I managed to reproduce the collisions on Python 2.7.3, i686 (output trimmed like you did; each run starts from $1 = {prefix = 0, suffix = 0} before the prefix is overridden):

```
for __PREFIX in 0x0 0x4700 0xdead00 0xcafe00; do \
  gdb --eval-command="break _PyRandom_Init" --eval-command="run" \
      --eval-command="print _Py_HashSecret" \
      --eval-command="set _Py_HashSecret.prefix=${__PREFIX}" \
      --eval-command="set _Py_HashSecret_Initialized=1" \
      --eval-command="print _Py_HashSecret" \
      --eval-command="continue" --eval-command="continue" \
      --args ./python -c "a='\x00\xcf\x0b\x96\x19'; b='\x00\x6d\x29\x45\x18'; print(hash(a)); print(hash(b))"; \
done
```
uname -a: ...

Anyway, you should be able to do the following (on 32-bit):
hash.py: a Python implementation of the 32-bit version of hash(bytes). OK, I now see that only the lower 8 bits of the prefix are really mixed with the input string.
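For readers without the attachment, here is a minimal model of what such a hash.py computes, transcribed from CPython 2.7's string_hash() in Objects/stringobject.c (a sketch for study, not the shipped code); the probe values below are the ones from the reproductions in this thread:

```python
def hash32(data, prefix, suffix):
    """Model of CPython 2.7's randomized 32-bit string hash."""
    MASK = 0xFFFFFFFF
    if not data:
        return 0                          # 2.7 special-cases the empty string
    x = (prefix ^ (data[0] << 7)) & MASK  # x = prefix; x ^= *p << 7;
    for byte in data:                     # x = (1000003*x) ^ *p++;
        x = ((1000003 * x) ^ byte) & MASK
    x ^= len(data)
    x ^= suffix
    return x & MASK

# The colliding pair from the reproductions above; per those reports the
# collision should survive any change to the upper 24 bits of the prefix
# (the suffix never matters for collisions, since it is XORed into both).
a = b"\x00\xcf\x0b\x96\x19"
b = b"\x00\x6d\x29\x45\x18"
for prefix in (0x0, 0x4700, 0xdead00, 0xcafe00):
    print(hex(prefix), hash32(a, prefix, 0) == hash32(b, prefix, 0))
```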
One possible fix: It seems not to have same collisions with many different hash seeds. |
Well, it was decided to not impact performances to workaround one specific class of attack, whereas there are many other ways to DoS Python. So we chose to keep the main loop unchanged. Randomizing the hash is not a fix for the hash DoS, it only makes the attacker harder. |
Oops, I attached the wrong "hash.py" file. |
We should see if more security can be gained without sacrificing speed. |
Yes, the suffix is used to "protect" the secret. Without the suffix, it would be too simple to compute the prefix: getting a single hash value of a known string would leak the prefix.
I don't know if we can do better and/or if it is a critical issue.
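To make that point concrete with the hash32 model sketched above: with a zero suffix, a single hash of a known one-byte string leaks the prefix, because 1000003 is odd and hence invertible modulo 2^32 (a sketch; recover_prefix is an illustrative helper, and the modular-inverse pow needs Python 3.8+):

```python
def recover_prefix(h, c):
    """Invert hash32 for a 1-byte input when the suffix is zero."""
    inv = pow(1000003, -1, 2 ** 32)     # modular inverse of the odd multiplier
    y = (h ^ 1) & 0xFFFFFFFF            # undo the final "x ^= len" (len == 1)
    x0 = ((y ^ c) * inv) & 0xFFFFFFFF   # undo the multiply-and-xor round
    return x0 ^ (c << 7)                # undo the initial prefix mixing

prefix = 0xDEADBEEF
h = hash32(b"A", prefix, 0)             # a hash an attacker could observe
assert recover_prefix(h, ord("A")) == prefix
```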
I've modified unicodeobject's unicode_hash() function. V8's algorithm is about 55% slower for an 800 MB ASCII string on my box.

Python's current hash algorithm for bytes and unicode:

```c
while (--len >= 0)
    x = (1000003 * x) ^ (Py_uhash_t) *P++;
```

```
$ ./python -m timeit -s "t = 'abcdefgh' * int(1E8)" "hash(t)"
10 loops, best of 3: 94.1 msec per loop
```

V8's algorithm:

```c
while (--len >= 0) {
    x += (Py_uhash_t) *P++;
    x += ((x + (Py_uhash_t)len) << 10);
    x ^= (x >> 6);
}
```

```
$ ./python -m timeit -s "t = 'abcdefgh' * int(1E8)" "hash(t)"
10 loops, best of 3: 164 msec per loop
```
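For experimenting with the mixing behavior outside of a C build, here is a rough pure-Python transcription of the V8-style loop above (a 32-bit model; v8_like_hash is an illustrative name, not anything CPython shipped):

```python
def v8_like_hash(data, seed=0):
    """32-bit model of the V8-style additive hash loop quoted above."""
    MASK = 0xFFFFFFFF
    x = seed & MASK
    n = len(data)
    for i, byte in enumerate(data):
        remaining = n - 1 - i            # mirrors the pre-decremented len in C
        x = (x + byte) & MASK
        x = (x + ((x + remaining) << 10)) & MASK
        x ^= x >> 6
    return x

print(hex(v8_like_hash(b"abcdefgh")))
```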
Just to make it extra clear: Vlado showed that the "-R" switch of Python can easily be made fully pointless, with only a bit of extra work. Here is how:
It's interesting to note how this whole -R discussion made very long threads on python-dev, and python-dev has subsequently ignored (for the past 6 months!) the fact that their "fix" can be worked around in a matter of minutes. (For information, I'm sure that if the algorithm is improved to depend on all 32 or 64 bits of the prefix, it would still be easy to crack it. You don't actually need to send 2**32 or 2**64 requests to the web server: with careful design you can send only 32 or 64 requests that each leak one bit of information. Doing that requires a bit more research, but once the recipe is known, it can be freely reused, which seems to defeat the original point.) |
For reference, the above means that we can implement -R support for PyPy as a dummy ignored flag, and get "security" that is very close to CPython's. :-) |
That doesn't make it an easy CPython issue. :)
As far as my understanding goes, the issue can't be solved with our current hash algorithm. We'd have to use a crypto hash function, or at least a hash algorithm with an increased avalanche effect on the outcome. The current hash algorithm is designed and optimized for speed, not for security. Any other algorithm is going to slow down hashing. Small strings and strings with lots of NUL bytes may leak too much information, too.
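The weak avalanche behavior is easy to demonstrate with the hash32 model from earlier in this thread: because the last input byte is only XORed into the state, flipping any bit of it flips exactly one output bit (a small sketch under that model):

```python
def avalanche(data, bit):
    """Count output bits that change when one bit of the last byte flips."""
    flipped = bytearray(data)
    flipped[-1] ^= 1 << bit
    diff = hash32(bytes(data), 0, 0) ^ hash32(bytes(flipped), 0, 0)
    return bin(diff).count("1")

for bit in range(8):
    print(bit, avalanche(b"abcdefgh", bit))   # prints 1 for every bit position
```

A strong hash would flip about half of the 32 output bits for any single-bit input change.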
No, this issue has not been ignored. Nobody proposed anything to fix this issue. In my opinion, we cannot solve this issue without slowing down Python. How do other languages (say Perl, Ruby, PHP, JavaScript) handle this issue?
I got different numbers (32-bit Linux, AMD Athlon 64 X2 4600+). Python's current hash algorithm: ... V8's algorithm: ...
On 21.10.2012 23:42, STINNER Victor wrote:
Well, I did propose a different approach to the whole problem: collision counting. The proposal was shot down with the argument that it wouldn't fix the underlying issue. It should also be noted that the randomization only applies to strings and bytes, not to other hashable types. Perhaps it's time to revisit the collision counting idea? It would work in much the same way as the stack recursion limit.
Benjamin: oops, sorry. I don't remember setting the "easy" keyword; my mistake. FWIW I'm +1 on Marc-Andre's solution. Make it a tunable setting, e.g. with sys.setcollisionlimit(). It defaults to sys.maxint on existing Pythons and some smaller value (70?) on new Pythons. It has the same benefits as the recursion limit: it's theoretically bad, but most of the time very useful. It would also crash on bad usages of custom __hash__() methods: e.g. if you put a lot of keys in a dict, all with a custom __hash__() that returns 42. I imagine that it can be considered a good thing to raise in this case rather than silently degrade performance forever.
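As a sketch of how such a tunable limit could behave, here is a toy open-addressing map that raises once a single probe chain exceeds the limit (GuardedDict and CollisionLimitError are illustrative names, not proposed API; resizing is omitted for brevity):

```python
class CollisionLimitError(RuntimeError):
    """Raised when one lookup chain exceeds the collision limit."""

class GuardedDict:
    """Toy open-addressing map with a per-instance collision limit."""

    def __init__(self, limit=70, size=64):
        self.limit = limit
        self.slots = [None] * size          # entries are (hash, key, value)

    def _probe(self, key):
        mask = len(self.slots) - 1
        h = hash(key)
        perturb = h & (2 ** 64 - 1)
        i = h & mask
        collisions = 0
        while True:
            entry = self.slots[i]
            if entry is None or (entry[0] == h and entry[1] == key):
                return i
            collisions += 1
            if collisions > self.limit:
                raise CollisionLimitError("collision limit exceeded")
            i = (5 * i + perturb + 1) & mask   # CPython-style probe sequence
            perturb >>= 5

    def __setitem__(self, key, value):
        self.slots[self._probe(key)] = (hash(key), key, value)

    def __getitem__(self, key):
        entry = self.slots[self._probe(key)]
        if entry is None:
            raise KeyError(key)
        return entry[2]
```

Flooding such a map with keys that share a hash raises CollisionLimitError instead of silently degrading to quadratic behavior.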
Here's the message that helped convince us to go against collision counting originally: http://mail.python.org/pipermail/python-dev/2012-January/115726.html |
How about using a secure hash algorithm that's implemented in HW when available. It doesn't eliminate the issue on systems that lack this support but at least it limits the scope of the problem. Of course some testing would need to be done to make sure the hardware hashing doesn't have a significant impact on performance. |
René, a balanced tree requires rebalancing on every (or almost every) item for some special (sorted) data sequences. |
On Fri, Nov 30, 2012 at 08:06:32PM +0000, Serhiy Storchaka wrote:
That's perfectly true, and it holds for most unsorted sequences as well.
On 30.11.2012 21:06, Serhiy Storchaka wrote:
Sure, but that's still O(N*log N) for an attack scenario, not O(N^2).

I think the main point here is that using hash tables or dictionaries for untrusted data is a risk by itself. Developers need to be made aware of the problem and given data types that let them handle it safely. If a developer has to build a lookup table from untrusted data, it is better to be able to write:

```python
try:
    mapping = insert_untrusted_data(source)
except CollisionLimitError:
    # refuse the input or fall back to a safer container
    ...
```

instead of:

```python
# Hmm, let's hope this doesn't bomb...
mapping = insert_untrusted_data(source)
```

At the moment, the best thing you can do is insert the data in a controlled way and hope for the best. If we make the collision counting limit a per-dictionary adjustable setting, applications could tune it to their needs.

Aside: it's better to know your worst case and program for it, rather than run into it unprepared.
Serhiy Storchaka: Yes, but it is still O(log n) worst case. Even in the worst-case rebalancing, you only need to walk up/down rotating/splitting every node in your path. As the tree height is guaranteed to be x * log n (x from 1 to 2, depending on the algorithm), the rebalancing operation is always limited by O(log n).
Agree. However, I think that for large enough data a balanced tree is slower than a hash table.
Maybe use a more general solution, e.g. running the insertion under a timeout:

```python
try:
    ...
```

(You can use different measurements for the timeout: user time, real time, ticks.)
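On Unix, the general idea could be prototyped with an alarm signal (a sketch; main-thread only, real-time based, and run_with_timeout/InsertTimeout are illustrative names):

```python
import signal

class InsertTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise InsertTimeout

def run_with_timeout(func, seconds, *args, **kwargs):
    """Run func, raising InsertTimeout if it takes longer than `seconds`."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return func(*args, **kwargs)
    finally:
        signal.alarm(0)                  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

# Usage, mirroring the snippet above (insert_untrusted_data is hypothetical):
# try:
#     mapping = run_with_timeout(insert_untrusted_data, 10, source)
# except InsertTimeout:
#     ...  # treat the input as hostile
```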
On 30.11.2012 22:27, Serhiy Storchaka wrote:
Sure, as long as there's a way to break into the execution, a timeout would work as well. The problem is that you'd have to check for the timeout at some rather low level of the implementation. Collision counting is one such method of detecting that something is likely going wrong; it is cheap to implement at exactly the spot where the problem occurs. As long as you know that there are places in your code that can run into pathological behavior on crafted input, you can guard against them. The dictionary implementation is one such place, where you have that knowledge. If there's a way to solve all these things in general, even better.
Why not redefine -R to mean "use secure hashing algorithms for built-in types"? When specified, use hashing algorithms that are secure against denial-of-service and other known attacks, at the possible expense of performance. When not specified, use whatever hashing algorithms provide the most sensible defaults for every-day use (basically hash the way Python currently hashes). Secure hashing would apply not just to strings but to numeric and other types as well. This would break the invariant that values which compare equal hash equal across types (e.g. hash(1) == hash(1.0)).
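The invariant at stake is easy to see in an interactive session:

```python
d = {1: "int"}
d[1.0] = "float"    # replaces the entry: 1 == 1.0 and hash(1) == hash(1.0)
print(d)            # {1: 'float'}
print(hash(1) == hash(1.0) == hash(True))   # True on CPython
```

If secure hashing broke this, equal keys could land in different hash bins and dict/set uniqueness would silently fail.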
According to a talk at 29c3: http://events.ccc.de/congress/2012/Fahrplan/events/5152.en.html Quote: We also describe a vulnerability of Python's new randomized hash, allowing an attacker to easily recover the 128-bit secret seed. As a reliable fix to hash-flooding, we introduce SipHash, a family of cryptographically strong keyed hash functions competitive in performance with the weak hashes, and already adopted in OpenDNS, Perl 5, Ruby, and in the Rust language.
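For reference, SipHash-2-4 is compact enough to sketch in pure Python (an illustrative transcription of the published algorithm, far slower than the C version CPython later adopted via PEP 456):

```python
import struct

_MASK = 0xFFFFFFFFFFFFFFFF

def _rotl(x, b):
    return ((x << b) & _MASK) | (x >> (64 - b))

def siphash24(k0, k1, data):
    """SipHash-2-4 of `data` (bytes) under the 128-bit key (k0, k1)."""
    v0 = k0 ^ 0x736F6D6570736575
    v1 = k1 ^ 0x646F72616E646F6D
    v2 = k0 ^ 0x6C7967656E657261
    v3 = k1 ^ 0x7465646279746573

    def sipround():
        nonlocal v0, v1, v2, v3
        v0 = (v0 + v1) & _MASK; v1 = _rotl(v1, 13); v1 ^= v0; v0 = _rotl(v0, 32)
        v2 = (v2 + v3) & _MASK; v3 = _rotl(v3, 16); v3 ^= v2
        v0 = (v0 + v3) & _MASK; v3 = _rotl(v3, 21); v3 ^= v0
        v2 = (v2 + v1) & _MASK; v1 = _rotl(v1, 17); v1 ^= v2; v2 = _rotl(v2, 32)

    # compression: 8-byte little-endian words, two rounds each
    tail_len = len(data) % 8
    for i in range(0, len(data) - tail_len, 8):
        (m,) = struct.unpack_from("<Q", data, i)
        v3 ^= m
        sipround(); sipround()
        v0 ^= m

    # last block: remaining bytes plus the input length in the top byte
    m = (len(data) & 0xFF) << 56
    for i, byte in enumerate(data[len(data) - tail_len:]):
        m |= byte << (8 * i)
    v3 ^= m
    sipround(); sipround()
    v0 ^= m

    # finalization: four rounds
    v2 ^= 0xFF
    for _ in range(4):
        sipround()
    return v0 ^ v1 ^ v2 ^ v3

print(hex(siphash24(0x0706050403020100, 0x0F0E0D0C0B0A0908, b"hello")))
```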
Thanks for the information! I'm working on a PEP for the issue at hand. |
Bob, the hash invariant isn't a mere implementation detail, it is critical to making hash based data structures work properly - if two equal objects (say the integer zero and the float zero) ever end up in different hash bins, then the uniqueness property of dictionary keys and sets breaks down. The three proposed mitigation strategies (using SipHash for string hashing, a tunable collision counting hash map and providing a non-hash based mapping container in the standard library) are all reasonable approaches to the problem and, most importantly, they're *orthogonal* approaches to the problem. There's nothing stopping us doing all three if someone is willing and able to provide the code. |
On 2 Jan 2013, at 00:20, Domen Kožar <report@bugs.python.org> wrote:
That is exactly the vulnerability that was previously mentioned in the context of this bug. SipHash is currently the only solution for a collision-resistant, fast-enough hash.

Giovanni Bajo
On 2 Jan 2013, at 06:52, Christian Heimes <report@bugs.python.org> wrote:
Since you're collecting ideas on this, I would like to stress that, in the Python 3 transition, it was deemed acceptable to switch all objects to use unicode strings for attribute names, making the hash computation of such attributes (in the context of the instance dictionary) at least twice as slow as it used to be (the 'at least' refers to the fact that longer strings might have even worse effects because of a higher number of cache misses). SipHash isn't twice as slow as the current hash function, not even for short strings. So there is a precedent for slowing down the hash computation in a very important use case, and it doesn't look like hell froze over.

Giovanni Bajo
2013/1/2 Giovanni Bajo <report@bugs.python.org>:
It's probably not too bad for attribute names because a) they're short and b) their hashes are cached after the first computation.
Giovanni, why do you think that hashing of unicode strings is slower than hashing of byte strings? First of all, ASCII-only unicode strings are packed and use just one byte per char. CPython's FNV implementation processes one element in each cycle, that is one byte for bytes and ASCII unicode, two bytes for UCS-2, and four bytes for UCS-4. Bytes and UCS-4 strings require the same amount of CPU instructions.
On 2 Jan 2013, at 19:51, Christian Heimes <report@bugs.python.org> wrote:
Ah sorry, I stand corrected (though packing wasn't there in 3.0, was it? I was specifically referring to the 2.x -> 3.0 transition).

Giovanni Bajo
See the microbenchmark results in bpo-16427. Hashing of ASCII/UCS1 strings really is more than 2x slower than bytes hashing.
The issue has been solved for Python 3.4 with the integration of PEP-456. |
This issue has belatedly had a CVE assigned: CVE-2013-7040 ("CPython hash secret can be recovered remotely") 3.4 is not affected (due to PEP-456), but 3.3 and 2.7 are still affected. |
I think that's just WONTFIX at this point. |
Hello, I want to know how to resolve this problem on 2.7.x. Do you have any good plan?
Python 2.7 reached EOL over two years ago and is no longer supported by us. You either need to update to a supported version, fix security problems yourself, or contract a vendor. |
OK, thanks.
Wow, I'm impressed that people ask security questions about Python 2.7. Are you still using Python 2.7 in production? |
Yes. Our project started in 2010; it's so huge that we cannot upgrade from Python 2.7. It's terrible.
You can patch Python yourself, but Python 2.7 no longer accepts any change upstream: the branch in Git has been removed. I suggest you start porting your code base incrementally to Python 3, but this closed issue is the wrong place to discuss that (and I'm not available to help you). Good luck with that ;-)
thanks |