[Python-Dev] Status of the fix for the hash collision vulnerability

Glenn Linderman v+python at g.nevcal.com
Sat Jan 14 03:09:33 CET 2012


On 1/13/2012 5:35 PM, Victor Stinner wrote:

- Glenn Linderman proposes to fix the vulnerability by adding a new "safe" dict type (only accepting string keys). His proof-of-concept (SafeDict.py) uses a secret of 64 random bits and uses it to compute the hash of a key.

We could mix Marc's collision counter with the SafeDict idea (being able to use a different secret for each dict): use hash(key, secret) (simple example: hash(secret+key)) instead of hash(key) in dict (and set), and change the secret if we have more than N collisions. But it would slow down all dict operations (creation, get, set, del, ...), and getting new random data can also be slow.

SafeDict and hash(secret+key) lose the benefit of the cached hash result. Because the hash result depends on an argument, we cannot cache the result anymore, and we have to recompute the hash for each lookup (even if you look up the same key twice or more).

Victor
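(For concreteness, a minimal sketch of the hash(secret+key) idea Victor describes might look like the following; the secret and the function name are placeholders of mine, not the actual SafeDict.py code.)

import os

# A per-process secret of 64 random bits, mixed into every key hash.
SECRET = os.urandom(8)

def keyed_hash(key):
    # String keys only, as in SafeDict.  Because the result depends on
    # SECRET, the str object's cached hash cannot be reused; the hash
    # must be recomputed for every lookup.
    return hash(SECRET + key.encode("utf-8"))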

So integrating SafeDict into dict, so that a dict could be converted automatically, would mean changing the data structures underneath dict. Given that, a hash-caching technique could be created that isn't quite as good as the one in place, but may be less expensive than not caching the hashes at all. It would also take more space: a second dict internally, as well as the secret.

So once the collision counter reaches some threshold (since there would be a functional fallback, it could be much lower than 1000), the secret is obtained, and the keys are rehashed using hash(secret+key). Now when lookups occur, the object id of the key and the hash of the key are used as the index, and hash(secret+key) is stored as the cached value. This would only benefit lookups by the same key object; other objects with an equal key value would be recalculated (at least the first time). Some limit on the number of cached values would probably be appropriate.
This would add complexity, of course, in trying to save time.
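Roughly, the cache could look something like this (the names, the 64-bit secret, and the cache limit are only placeholders, keys are assumed to be strings, and a real version would also have to worry about id() reuse when key objects are freed):

import os

CACHE_LIMIT = 1024  # hypothetical bound on the number of cached values

class KeyedHashCache:
    def __init__(self):
        self.secret = os.urandom(8)   # obtained once the threshold is hit
        self._cache = {}              # id(key object) -> hash(secret+key)

    def keyed_hash(self, key):
        h = self._cache.get(id(key))
        if h is None:
            # Recompute only for key objects we haven't seen before.
            h = hash(self.secret + key.encode("utf-8"))
            if len(self._cache) < CACHE_LIMIT:
                self._cache[id(key)] = h
        return h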

An alternate solution would be to convert a dict to a tree once the number of collisions starts producing poor performance. Converting to a tree would result in O(log N) instead of O(1) lookup performance, but that is better than the degenerate O(N) case produced by the excessive collisions of an attack. This would require new tree code to be included in the core, of course, probably a red-black tree, which stays balanced.
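As a rough stand-in for that conversion (a sorted list with bisect is used here where real core code would want a red-black tree, and the names are mine):

import bisect

def to_sorted(d):
    # Convert the dict's items into parallel sorted-key / value lists.
    keys = sorted(d)
    return keys, [d[k] for k in keys]

def sorted_lookup(keys, values, key):
    # O(log N) lookup in the converted structure, however bad the hashes are.
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    raise KeyError(key)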

In either of these cases, the conversion is expensive, because a collision threshold must first be reached to determine the need for conversion, so the dict could already contain lots of data. If the conversion were too expensive, the attack could still be effective.

Another solution would be to change the collision handling code, so that colliding keys produce something better than O(N) behavior.
Each colliding entry could, perhaps, be converted to a tree of entries. This would require no conversion of "bad dicts", and an attack could at worst degrade O(1) performance to O(log N).
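A toy sketch of that per-entry approach, with a sorted list standing in for the per-slot tree, an artificially tiny slot count, and names of my own choosing:

import bisect

NSLOTS = 8  # deliberately tiny so collisions are easy to see

class TreeBucketTable:
    # Each slot keeps its colliding keys sorted, so lookups within a slot
    # cost O(log N) via bisect instead of a linear scan over the collisions.

    def __init__(self):
        self._slots = [([], []) for _ in range(NSLOTS)]  # (keys, values)

    def __setitem__(self, key, value):
        keys, values = self._slots[hash(key) % NSLOTS]
        i = bisect.bisect_left(keys, key)
        if i < len(keys) and keys[i] == key:
            values[i] = value
        else:
            keys.insert(i, key)
            values.insert(i, value)

    def __getitem__(self, key):
        keys, values = self._slots[hash(key) % NSLOTS]
        i = bisect.bisect_left(keys, key)
        if i < len(keys) and keys[i] == key:
            return values[i]
        raise KeyError(key)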

Clearly these ideas are more complex than adding randomization, but adding randomization doesn't seem to produce immunity from attack when data about the randomness is leaked.


