[Python-3000] pep 3124 plans

Phillip J. Eby pje at telecommunity.com
Sat Jul 21 20:16:57 CEST 2007


At 10:55 PM 7/20/2007 -0700, Talin wrote:

You mentioned earlier that there was a design reason for preferring @overload and @when vs. the earlier RuleDispatch syntax, but the explanation you gave wasn't very clear (to me anyway).

(I personally prefer the @somegeneric.overload, but that's purely an aesthetic value judgement - if there's a strong architectural advantage of the other syntax, I'd like to hear it.)

You can't add new method combinations that way. If method combining is a function, not a method, then you can add as many new method types as you like. If you have to use @somegeneric.before and @somegeneric.after, you can't decide on your own to add @somegeneric.debug.

However, if it's @before(somegeneric...), then you can add @debug and @authorize and @discount and whatever else you need for your application, without needing to monkeypatch them in.
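Concretely, the contrast looks something like this. (A rough sketch only: 'flatten' loosely follows the PEP's example, and it assumes the 'overloading' module the PEP proposes.)

    from overloading import overload, when   # module proposed by PEP 3124

    def flatten(ob):
        """Default case: yield the object itself."""
        yield ob

    @overload
    def flatten(ob: list):
        for item in ob:
            for result in flatten(item):
                yield result

    # The same kind of registration, spelled with the standalone decorator form:
    @when(flatten)
    def flatten_tuple(ob: tuple):
        for item in ob:
            for result in flatten(item):
                yield result

    # Because when(), before(), after(), etc. are ordinary callables that take
    # the generic function as an argument, an application can define its own
    # combinator -- say a hypothetical debug(flatten) -- without monkeypatching
    # anything onto flatten itself.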

To me, TOOWTDI means that all (or nearly all) the decorators should follow the same pattern.

Right. There are two reasons that I think that post-hoc overloading runs into problems. The first, as you mentioned, is that it's difficult to implement without some kind of trickery.

Well, that depends on what you define as "trickery", but clearly Guido feels that being able to overload an existing function without having to go through the code of every possible client is indeed "trickery".

IMO, however, going through the code of the clients is an unreasonable and unscalable task that goes against the whole point of the exercise: to "assert quantified statements over oblivious code" (one common definition of aspect-oriented programming). If I have to go through all the code that might have imported the function and stored it somewhere, that's hardly oblivious. It creates an opportunity for invisible, import-sequence-dependent bugs that can be reintroduced any time somebody changes an import statement!
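To make that concrete, here's a single-file simulation (all names invented) of what goes wrong when the "overload" works by rebinding a name instead of updating the function in place:

    # "framework" part: the original, oblivious function
    def ship_order(order):
        print("shipping", order)

    # "client" part: grabs a direct reference at import time
    # (stands in for: from framework import ship_order)
    client_ship = ship_order

    # "rules" part: overloads by rebinding the module-level name
    _plain_ship = ship_order

    def ship_order(order):
        if order.get("balance", 0) > 0:
            raise ValueError("outstanding balance")
        _plain_ship(order)

    client_ship({"balance": 100})      # rule silently skipped: the client is
                                       # still holding the old function object
    try:
        ship_order({"balance": 100})
    except ValueError as e:
        print("rule fired:", e)        # only callers who re-fetch the name see it

Whether the rule applies thus depends on which reference a given caller happened to capture, and therefore on the order in which the modules got imported.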

So the irony, IMO, of avoiding this "trickery" is that it makes the practice error-prone, thereby providing a self-fulfilling justification for avoiding its use. (Whereas, if the "trickery" were allowed, it would be much safer to actually use it.)

All that having been said, I'm still willing to make an implementation that does it Guido's way. I just don't agree that the restriction is justified. But more on that below.

The second reason - this is my opinion - is that it too much resembles the mythical "comefrom" statement (the opposite of "goto"). The "comefrom" statement is intended to be a joke - the worst possible language feature from the standpoint of being able to manually trace the flow of execution of a program.

Well, I've worked with people who dislike OO for exactly the same reason, since they feel they can never know whether a method might have been overridden in a subclass. Seriously!

However, for the specific use cases I have in mind, you'd be using oblivious extension to implement customer-specific business rules, layered atop a core framework. You don't want to waste time declaring everything overloadable, any more than you declare classes to be subclassable! You just need to be able to write the customer's rules in one place. So if you're trying to follow something manually, you're going to look at that customer's business rule modules in order to know about the exceptional control flow.

I don't think that's really comparable to the joke implementation of "come from". In any system, the more the computer does for you, the harder it will be for you to mentally emulate what the computer's doing, step-by-step. That's simply the nature of the beast.

However, in the case of rule-based declarative abstractions, you're getting closer to something that's easier for the brain to model. Our brains run by pattern recognition, with more-specific patterns taking precedence, so this is an easier model for your brain to follow than step-by-step computation anyway. Certainly, it's an easier model for your software customers to provide you with in the first place.

I.e., customers usually don't give you a step-by-step procedure: "well, first I check if the customer has an outstanding balance before I ship them anything." They say, "Don't ship stuff to people with an outstanding balance."

And guess what? Viewed formally, that's a "come from" statement.

So the most straightforward expression of typical business rules and requirements is going to consist of a list of come-froms. And coding them that way actually gets us more verifiable requirements, and a simpler mental model to produce the code in the first place.
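Written as PEP 3124-style "before" advice, that rule really is a single declaration about code that never mentions it. (The Order/Customer/ship_order names here are invented, and again this assumes the proposed 'overloading' module.)

    from overloading import before    # module proposed by PEP 3124

    class Customer:
        def __init__(self, balance):
            self.balance = balance

    class Order:
        def __init__(self, customer):
            self.customer = customer

    def ship_order(order: Order):
        """Core framework code; knows nothing about billing policy."""
        ...  # pick, pack, ship

    # The customer's rule, stated once, in the customer's own rules module:
    @before(ship_order)
    def refuse_deadbeats(order: Order):
        if order.customer.balance > 0:
            raise ValueError("outstanding balance; not shipping")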

One issue that hasn't been satisfactorily resolved is the handling of the 'self' parameter. At least, let me give my explanation of what I think the issue is and see if we're on the same page:

Overloading a class method requires special treatment of the 'self' parameter because there's an implicit constraint on what types of objects can be passed as 'self': for any method defined in any class, the 'self' parameter must be an instance of the class (or a subclass) in which the method is defined. Now, this would be trivial if we required the programmer to explicitly declare the type of 'self', but this violates DRY and has the potential to cause mischief if the programmer forgets to update the method signature when they change the class.

Well, actually that never occurred to me, because obviously you can't do that (refer to the class before it's finished being defined). :)
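For the record, here's the two-second demonstration of why the explicit declaration can't even be written (made-up 'render' generic, same proposed 'overloading' module):

    from overloading import when    # module proposed by PEP 3124

    def render(widget, surface):
        """Generic rendering function, invented for illustration."""

    class Button:
        @when(render)
        def render_button(self: Button, surface):   # NameError: 'Button' is not defined
            # The annotation is evaluated while the class body is still
            # executing, before the name 'Button' exists anywhere.
            ...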

If it turns out that there's no way to get a callback when the class has finished being built, then we may have to defer finishing the construction until the first time the generic function is called. This wouldn't be too bad, considering that there's a bunch of other stuff that is lazily calculated on first call anyway, from what I understand.

Actually, this isn't anywhere near as complicated as all the stuff I just snipped from the above. :) All that matters is whether the decorator is invoked in the body of a class. If it is, it needs a callback to finish the job. If it isn't, it can immediately go ahead with what it's doing.

Note that this was implemented in RuleDispatch literally years ago; it's only the loss of the __metaclass__ class-body hook that presents a problem for a Py3K implementation.
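For anyone curious, the machinery boils down to something like the following (entirely my own naming here, not the actual RuleDispatch or PEP 3124 code; resolving the implied type of 'self' is the part that needs the class-creation callback, or failing that, the lazy first-call step):

    import sys

    _pending = []   # (generic_function, method) pairs awaiting their class

    def _in_class_body(frame):
        # A class body runs with its own local namespace, distinct from the
        # module globals, and with '__module__' already assigned.
        return (frame.f_locals is not frame.f_globals
                and '__module__' in frame.f_locals)

    def when(generic):
        def decorator(func):
            if _in_class_body(sys._getframe(1)):
                # Can't know the implied type of 'self' yet, so park the
                # registration; it gets finished once the class exists
                # (or, without a callback, the first time 'generic' runs).
                _pending.append((generic, func))
            else:
                generic.register(object, func)   # stand-in for the real registry
            return func
        return decorator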


