Message 151739 - Python tracker

> In this patch, rather than reset the count each time, I keep track of the total number of calls to insertdict() that have happened for each "large dict" (i.e. for which ma_table != ma_smalltable), and the total number of probe iterations that have been needed to service those insertions/overwrites. It raises the exception when the number of probe iterations per insertion exceeds a threshold factor (rather than merely comparing the number of iterations against the current ma_used of the dict).

This sounds much more robust than the previous attempt.
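To make the accounting concrete, here is a rough Python model of what the quoted paragraph describes. This is only a sketch, not the actual C patch to insertdict(): resizing and the small-table special case are omitted, the table size is fixed, and the names PROBE_FACTOR and GuardedDict are invented here, with the exception message merely shaped to echo the quoted one.

    PROBE_FACTOR = 64  # allowed probe iterations per insertion (illustrative)

    class AlgorithmicComplexityError(Exception):
        pass

    class GuardedDict:
        """Toy open-addressing table with cumulative probe accounting."""

        def __init__(self, size=1024):      # fixed size: resizing omitted
            self.slots = [None] * size      # (hash, key, value) or None
            self.used = 0
            self.total_insertions = 0       # never reset, as described above
            self.total_probes = 0

        def insert(self, key, value):
            h = hash(key)
            mask = len(self.slots) - 1
            i = h & mask
            perturb = h & 0xFFFFFFFFFFFFFFFF   # treat the hash as unsigned
            probes = 0
            while True:                        # CPython-style probe sequence
                probes += 1
                slot = self.slots[i]
                if slot is None or slot[1] == key:
                    break
                i = (5 * i + perturb + 1) & mask
                perturb >>= 5
            if slot is None:
                self.used += 1
            self.slots[i] = (h, key, value)
            self.total_insertions += 1
            self.total_probes += probes
            if self.total_probes > PROBE_FACTOR * self.total_insertions:
                raise AlgorithmicComplexityError(
                    "dict construction used %d probes whilst performing "
                    "%d insertions (len() == %d) at key %r with hash %d"
                    % (self.total_probes, self.total_insertions,
                       self.used, key, h))

Honest keys with well-spread hashes stay near one probe per insertion, so the cumulative ratio never comes close to the threshold.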

> When attacked, this leads to exceptions like this:
> AlgorithmicComplexityError: dict construction used 1697 probes whilst performing 53 insertions (len() == 58) at key 58 with hash 42

We'll have to discuss the name of the exception and the error message :)
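For contrast, the input that produces such an exception is a batch of keys whose hashes all collide: the n-th colliding insertion has to probe past all n-1 earlier entries, so building the dict costs O(n**2) probe iterations in total. A minimal reproducer against the toy GuardedDict above (the Collide class is invented here, and the constant 42 just echoes the quoted message):

    class Collide:
        """All instances share one hash; equality stays identity-based."""
        def __hash__(self):
            return 42

    d = GuardedDict()
    try:
        for n in range(500):
            d.insert(Collide(), None)   # the n-th insertion probes ~n slots
    except AlgorithmicComplexityError as exc:
        print(exc)                      # trips once probes/insertions > 64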

> Caveats:

> How do you suggest fixing it?

I'd make the threshold factor a constant, e.g. 64 or 128 (it should not be too small, to avoid false positives). We're interested in the actual slowdown factor, which a constant factor models adequately; it's the slowdown factor that makes a DoS attack using this technique efficient. Whether or not dict construction truly degenerates into an O(n**2) operation is less relevant. For scale, the quoted error corresponds to 1697/53 ≈ 32 probes per insertion, while a well-behaved dict averages only a couple, so a constant in the 64-128 range leaves comfortable headroom against false positives.

There needs to be a way to disable it: an environment variable would be the minimum, IMO. Also, in 3.3 there should probably be a sys function to enable or disable it at runtime. I'm not sure that function should be backported, though, since it's a new API.
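Purely to illustrate the shape of those two knobs, here is a sketch; neither knob exists, and both the environment variable name PYTHONNODICTCHECK and the function names below are invented for this example:

    import os

    # Startup: an environment variable disables the check before any user
    # code runs -- the "minimum" option, and the only one easy to backport.
    _check_enabled = "PYTHONNODICTCHECK" not in os.environ

    # Runtime: a sys-style toggle for 3.3+ (new API, hence the backporting
    # question).
    def set_dict_probe_check(enabled):
        global _check_enabled
        _check_enabled = bool(enabled)

    def get_dict_probe_check():
        return _check_enabled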