[Python-Dev] PEP 3103: A Switch/Case Statement

Guido van Rossum guido at python.org
Mon Jun 26 22:06:24 CEST 2006


On 6/26/06, Ka-Ping Yee <python-dev at zesty.ca> wrote:

> On Mon, 26 Jun 2006, Guido van Rossum wrote:
> > I've written a new PEP, summarizing (my reaction to) the recent
> > discussion on adding a switch statement. While I have my preferences,
> > I'm trying to do various alternatives justice in the descriptions.

> Thanks for writing this up! The section that most draws my attention
> is "Semantics", and i guess it isn't a surprise to either of us that
> you had the most to say from the perspective you currently support
> (School II). :) Let me suggest a couple of points to add:
>
> - School I sees trouble in the approach of pre-freezing a dispatch
>   dictionary because it places a new and unusual burden on the
>   programmer to understand exactly what kinds of case values are
>   allowed to be frozen and when the case values will be frozen.

Can you please edit the PEP yourself to add this? That will be most efficient.

> - In the School II paragraph you say "Worse, the hash function might
>   have a bug or a side effect; if we generate code that believes the
>   hash, a buggy hash might generate an incorrect match" -- but that is
>   primarily a criticism of the School II approach, not of the School I
>   approach as you have framed it. It's School II that mandates that the
>   hash be the truth.

You seem to misunderstand what I'm saying or proposing here; admittedly I think I left something out. With school I, if you want to optimize using a hash table (as in PEP 275, Solution 1), you have to catch and discard exceptions in hash(), and a bug in hash() can still lead this optimization astray: if A == B but hash(A) != hash(B), then "switch A: // case B: ... // else: ..." may falsely take the else branch, causing a hard-to-debug difference between optimized and unoptimized code. With school II, exceptions in hash() aren't caught or discarded; a bug in hash() leads to the same behavior as optimized school I, but the behavior does not depend on the optimization level.
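To make the scenario concrete, here is a minimal sketch (my own illustration, not from the PEP or the thread) of a case value whose __hash__ is inconsistent with __eq__, so that A == B but hash(A) != hash(B). The if/elif chain (unoptimized school I semantics) matches, while a hash-table dispatch silently takes the else branch:

```python
class Broken:
    """Hypothetical case value with a buggy hash: equal objects
    hash differently, violating the __eq__/__hash__ contract."""
    def __init__(self, n):
        self.n = n
    def __eq__(self, other):
        return isinstance(other, Broken) and self.n == other.n
    def __hash__(self):
        return id(self)  # bug: two equal instances get different hashes

A, B = Broken(1), Broken(1)
assert A == B and hash(A) != hash(B)

def switch_unoptimized(x):
    # Unoptimized school I semantics: a chain of == comparisons.
    if x == B:
        return "case B"
    return "else"

def switch_optimized(x):
    # Hash-table optimization: dict lookup requires matching hashes
    # before it ever tries ==, so the buggy hash hides the match.
    table = {B: "case B"}
    return table.get(x, "else")

print(switch_unoptimized(A))  # case B
print(switch_optimized(A))    # else -- falsely takes the else branch
```

The two "equivalent" forms disagree, and only for objects with buggy hashes, which is exactly the hard-to-debug optimized/unoptimized divergence described above.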

> (It looks to me like what you're actually criticizing here is based on
> some assumptions about how you think School I might be implemented, and
> having taken School I a number of steps down that (unexplained) road
> you then see problems with it.)

Right. School I appears just as keen as school II to use hashing to optimize things, but isn't prepared to pay the price in semantics. I believe it is impossible for the optimized code to behave completely identically to the unoptimized code (even without counting side effects in hash() or __eq__()), so the school I position that the optimized version is equivalent to the unoptimized "official semantics" is untenable.
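One place the optimized and unoptimized forms necessarily differ, sketched below with hypothetical helpers of my own (not from the thread): an unhashable case value. The == chain handles it without complaint, while the hash-table version must catch and discard the TypeError from hash() and fall back to a slow path:

```python
case_value = [1, 2, 3]  # lists are unhashable

def switch_unoptimized(x):
    # Plain == chain: never calls hash(), so unhashable values are fine.
    if x == case_value:
        return "matched"
    return "else"

def switch_optimized(x):
    # Hash-table optimization must swallow hash() failures twice:
    # once while building the table, once while looking up x.
    table = {}
    try:
        table[case_value] = "matched"  # TypeError: unhashable type
    except TypeError:
        pass  # silently discarded -- the very thing school I objects to
    try:
        return table[x]
    except (TypeError, KeyError):
        # Slow-path fallback for values the table couldn't represent.
        if x == case_value:
            return "matched"
        return "else"

print(switch_unoptimized([1, 2, 3]))  # matched
print(switch_optimized([1, 2, 3]))    # matched, but only via the fallback
```

The observable results can be made to agree here, but only by silently discarding exceptions, so any side effect or bug in hash() still produces behavior that depends on whether the optimization fired.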

> Also, why is the discussion of School II mostly an argument against
> School I? What about describing the advantages of each school?

School II has the advantage of not incurring the problems I see with school I, in particular catching and discarding exceptions in hash() and differences between optimized and unoptimized code.
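A minimal sketch of what school II semantics might look like (my own illustration; the names make_switch and the freezing point are assumptions, not anything specified in PEP 3103): the case values are hashed exactly once, when the dispatch table is frozen, and any exception from hash() propagates loudly rather than being discarded:

```python
def make_switch(cases, default):
    """Hypothetical school II dispatch: freeze the table up front."""
    table = dict(cases)  # freezing point: hash() runs here, exactly once
    def switch(x):
        return table.get(x, default)
    return switch

switch = make_switch([("red", 1), ("green", 2)], default=0)
print(switch("red"))   # 1
print(switch("blue"))  # 0

# An unhashable case value fails at freeze time, the same way whether
# or not the interpreter optimizes anything:
try:
    make_switch([([1, 2], "oops")], default=0)
except TypeError as e:
    print("freeze failed:", e)
```

Under these semantics there is no optimized/unoptimized split to diverge: the hash table is the official meaning of the statement, so a buggy hash() misbehaves identically everywhere.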

--
--Guido van Rossum (home page: http://www.python.org/~guido/)


