Message 105586 - Python tracker

  1. In constrained memory environments, creating a temporary internal copy of a large set may cause a difference operation to fail that would otherwise succeed.

It's a space/time tradeoff. There's nothing wrong with that. (Do note that hash tables themselves take much more space than the "equivalent" list.) The two strategies being traded off are sketched below.
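A minimal pure-Python sketch of the two strategies, assuming the operation at issue is an in-place difference (a -= b). CPython's real implementation is in C; the helper names here are illustrative, not the actual API.

```python
def discard_in_place(a, b):
    """No temporary copy: one discard call per element of b."""
    for x in b:
        a.discard(x)

def rebuild_with_copy(a, b):
    """Temporary copy: one lookup into b per element of a, plus a
    second set held in memory until the swap. Faster when b is much
    larger than a, at the cost of the transient copy."""
    survivors = {x for x in a if x not in b}
    a.clear()
    a |= survivors
```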

  2. The break-even point between extra lookups and a copy is likely to be different on different systems or even on the same system under different loads.

So what? It's just a matter of choosing reasonable settings. There are other optimization heuristics in the interpreter.
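Continuing the sketch above, a hypothetical dispatch shows what "reasonable settings" means here. The cutoff ratio is invented for illustration and is not the value CPython uses; the true break-even point varies by system and load, which is exactly the point being conceded.

```python
CUTOFF = 4  # made-up ratio, not taken from CPython

def difference_update(a, b):
    if len(b) > CUTOFF * len(a):
        rebuild_with_copy(a, b)   # b is huge: the copy pays for itself
    else:
        discard_in_place(a, b)    # b is small: cheap in-place discards
```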

The optimization here looks ok to me.