[Python-Dev] Re: method decorators (PEP 318)

Robert Mollitor mollitor at earthlink.net
Sun Mar 28 15:00:22 EST 2004


On Sunday, March 28, 2004, at 09:45 AM, Phillip J. Eby wrote:

> At 12:54 PM 3/28/04 +0100, Paul Moore wrote:
>
>> Robert Mollitor <mollitor at earthlink.net> writes:
>>
>>> It would be nice if transformer decorations were never allowed
>>> "arguments".  It would keep that list as short and as tidy as
>>> possible.
>>
>> That's the sort of restriction I imagined that Guido was tending
>> towards.  While it's justifiable in this context, I would prefer to
>> leave the option of using arguments available, in case someone comes
>> up with a use where function attributes are inappropriate.
>
> It's inappropriate to use attributes of a function for attributes that
> logically belong to the decorator.  For example,
> 'synchronized(lockattr="baz")'.  The 'lockattr' logically belongs to
> the synchronizing decoration.  Declaring it in a separate location
> makes the whole thing harder to read/understand.

The following is an example in PEP 318:

def accepts(*types):
    def check_accepts(f):
        assert len(types) == f.func_code.co_argcount
        def new_f(*args, **kwds):
            for (a, t) in zip(args, types):
                assert isinstance(a, t), \
                    "arg %r does not match %s" % (a, t)
            return f(*args, **kwds)
        return new_f
    return check_accepts

def returns(rtype):
    def check_returns(f):
        def new_f(*args, **kwds):
            result = f(*args, **kwds)
        assert isinstance(result, rtype), \
                "return value %r does not match %s" % (result, rtype)
            return result
        return new_f
    return check_returns
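For anyone trying this today, here is a self-contained sketch of the two decorators above and the nesting they produce. It assumes modern Python spellings (`func_code` is now `__code__`), and the inner factory must be returned, not called:

```python
def accepts(*types):
    def check_accepts(f):
        # f.func_code in 2004-era Python is f.__code__ today
        assert len(types) == f.__code__.co_argcount
        def new_f(*args, **kwds):
            for (a, t) in zip(args, types):
                assert isinstance(a, t), \
                    "arg %r does not match %s" % (a, t)
            return f(*args, **kwds)
        return new_f
    return check_accepts

def returns(rtype):
    def check_returns(f):
        def new_f(*args, **kwds):
            result = f(*args, **kwds)
            assert isinstance(result, rtype), \
                "return value %r does not match %s" % (result, rtype)
            return result
        return new_f
    return check_returns

def func(arg1, arg2):
    return arg1 + arg2

# the proposed "def func(arg1, arg2) [accepts(...), returns(...)]"
# expands roughly to:
func = returns((int, float))(accepts(int, (int, float))(func))

print(func(1, 2.5))   # 3.5
```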

That is, each is a function that returns a function that returns a
function.  Why?  Because

def func(arg1, arg2) [accepts(int, (int, float)), returns((int, float))]:
    pass

expands roughly to "returns((int, float))(accepts(int, (int, float))(func))". Whether or not this is the best implementation, it is reasonable if you view the parameters as logically belonging to the decorator instead of to the function. With a transformer plus annotations, this could be recast as

def func [check_types] (arg1, arg2):
    :accepts (int, (int, float))
    :returns (int, float)
    pass

def check_types(f):
    if hasattr(f, 'accepts'):
        assert len(f.accepts) == f.func_code.co_argcount
    def new_f(*args, **kwds):
        if hasattr(f, 'accepts'):
            for (a, t) in zip(args, f.accepts):
                assert isinstance(a, t), \
                    "arg %r does not match %s" % (a, t)
        result = f(*args, **kwds)
        if hasattr(f, 'returns'):
            assert isinstance(result, f.returns), \
                "return value %r does not match %s" % (result, f.returns)
        return result
    return new_f
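Since the proposed ":accepts"/":returns" annotation syntax does not exist, a present-day sketch has to set the attributes by hand (and spell `func_code` as `__code__`); with those assumptions, it shows the attribute-driven checker at work:

```python
def check_types(f):
    if hasattr(f, 'accepts'):
        # check arity against the published attribute
        assert len(f.accepts) == f.__code__.co_argcount
    def new_f(*args, **kwds):
        if hasattr(f, 'accepts'):
            for (a, t) in zip(args, f.accepts):
                assert isinstance(a, t), \
                    "arg %r does not match %s" % (a, t)
        result = f(*args, **kwds)
        if hasattr(f, 'returns'):
            assert isinstance(result, f.returns), \
                "return value %r does not match %s" % (result, f.returns)
        return result
    return new_f

def func(arg1, arg2):
    return arg1 * arg2

# stand-ins for the proposed ":accepts"/":returns" annotations
func.accepts = (int, (int, float))
func.returns = (int, float)
func = check_types(func)

print(func(2, 3.0))   # 6.0
```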

As an added bonus, the function attributes are available for other inspecting operations such as generating documentation, say.

My point here is we may want to aim for making the information kept on the function object itself as rich as possible and make the "decorations" do the work of pulling information from whatever the function "publishes".

Even if you have a case like you mention of
'synchronized(lockattr="baz")', where perhaps you might want to say that nobody outside of this particular transformer implementation would ever want to know which attribute the function is synchronized on, there is a trivial workaround if we restrict the transformer list to identifiers:

    sync = synchronized(lockattr="baz")

    def func [sync] (arg1, arg2):
        pass

However, I think that in general people will decide to be "generous" and choose to publish those parameters as true function attributes on 'func', so this work-around should not be necessary.
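One hedged sketch of that "generous" style, using `threading.Lock` as stand-in locking machinery (the `synchronized` and `lockattr` names are the hypothetical ones from this discussion, not a real library API):

```python
import threading

def synchronized(lockattr):
    def wrap(f):
        lock = threading.Lock()
        def new_f(*args, **kwds):
            with lock:
                return f(*args, **kwds)
        # "publish" the decorator parameter as a function attribute,
        # so inspection tools can see it too
        new_f.lockattr = lockattr
        return new_f
    return wrap

sync = synchronized(lockattr="baz")

def func(arg1, arg2):
    return arg1 + arg2

func = sync(func)

print(func.lockattr)   # baz
print(func(1, 2))      # 3
```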

It is true that there is a potential namespace issue with function attribute names, but this is a general multiple-inheritance issue. In fact, it might not be a bad idea to view transformer decorations as "mix-ins" (though with non-trivial differences). Despite the fact that most of the examples in PEP 318 are functions, the only existing transformers, classmethod and staticmethod, are NOT functions. They are types. In fact, it may turn out that, in order to ensure commutability, we may require that all transformers be types/classes that follow a specific pattern.

Now, if the transformers are classes, then they are not functions that return a function or (in the parameterized case) functions that return a function that returns a function.

A non-parameterized transformer would be something like

class check_types:
    def __init__ (self, f):
        self.f = f
    def __call__(self, <some syntax involving asterisks>):
        do something with self.f
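Filling in the elided pieces with plain `*args, **kwds` gives a minimal concrete version of this sketch (what the transformer "does" with `self.f` is simply to call it here):

```python
class check_types:
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwds):
        # "do something with self.f" -- here, just delegate
        return self.f(*args, **kwds)

def func(x):
    return x + 1

# equivalent of: def func [check_types] (x): ...
func = check_types(func)

print(type(func).__name__)   # check_types
print(func(41))              # 42
```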

A parameterized transformer could be something like

class synchronized:
    def __init__(self, lockattr):
        self.lockattr = lockattr
    def __call__(self, <again with the asterisks>):
        def new_f(f):
            do something with f
        return new_f

But there is perhaps one crucial difference: the non-parameterized one returns a non-function object instance that may have other methods besides __call__ (to make the transformer play nice commutability-wise, say), whereas the parameterized one returns a true function object (which an outer transformer would probably treat as an unwrapped function). To make them analogous, you would need something like

class synchronized:
    def __init__(self, lockattr):
        self.lockattr = lockattr
    def __call__(self, <...>):
        class synchronized_wrapper:
            def __init__(self, f):
                self.f = f
            def __call__(self, <...>):
                do something with self.f
        return synchronized_wrapper

So "type(f)" (given "def f [synchronized(...)] (...): ...") would not be 'synchronized' but 'synchronized_wrapper'.
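A runnable sketch of that analogous shape confirms the type(f) observation. The elided signatures are filled in under one plausible reading (the outer __call__ takes the function directly), so treat the details as assumptions:

```python
class synchronized_wrapper:
    def __init__(self, f):
        self.f = f
    def __call__(self, *args, **kwds):
        # "do something with self.f" -- here, just call it
        return self.f(*args, **kwds)

class synchronized:
    def __init__(self, lockattr):
        self.lockattr = lockattr
    def __call__(self, f):
        return synchronized_wrapper(f)

def func(x):
    return x * 2

# equivalent of: def func [synchronized(lockattr="baz")] (x): ...
func = synchronized(lockattr="baz")(func)

print(type(func).__name__)   # synchronized_wrapper
print(func(3))               # 6
```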

robt
