[Python-Dev] Type hints -- a mediocre programmer's reaction

Paul Sokolovsky pmiscml at gmail.com
Tue Apr 21 17:27:50 CEST 2015


Hello,

On Tue, 21 Apr 2015 16:11:51 +0200 Antoine Pitrou <antoine at python.org> wrote:

[]

>>> You can't at the same time point out that type checking has no
>>> power or control over runtime behaviour, and then claim that type
>>> checking makes runtime behaviour (for example, ability to accept or
>>> reject certain types) saner. It is a trivial contradiction.
>>
>> I suspected there's either short-sightedness, or just word play for
>> the purpose of word play.
>>
>> Type annotations are NOT introduced for the purpose of static type
>> checking.

> I was replying to Steven's message. Did you read it?

Yes. And I try to follow the general course of the discussion, as it's hard to follow individual sub-threads. For example, yesterday's big theme was people threatening to stop contributing to the stdlib if annotations go in, and today's theme appears to be people saying that static type checking won't be useful. Your reply to Steven was simply the final straw that prompted me to point out that static type checking is not the crux of this, but just one usage (and not the biggest one, IMHO).

>> Runtime type checking (run via "make test", not in "production") is
>> much more interesting for catching errors.

> Obviously you're giving the word "runtime" a different meaning than I
> do. The type checker isn't supposed to actually execute the user's
> functions (it's not that it's forbidden, simply that it's not how it
> will work in all likelihood): therefore, it doesn't have any handle
> on what actually happens at runtime, vs. what is declared in the
> typing declarations.

Well, maybe "typechecker" is the wrong word then. I'm talking about an instrumented VM which actually interprets type annotations while running bytecode - for example, on entry to a function, it takes the argument annotations and executes a sequence of equivalent isinstance() checks, etc., etc. One wouldn't use such an instrumented VM at "production" runtime, but it would be useful to enable while running a testsuite (or integration tests), to catch more issues.
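For what it's worth, the instrumented-VM idea can be approximated today at the library level. A minimal sketch (the `check_annotations` decorator is a hypothetical helper for illustration, not anything defined by the PEP), performing the same isinstance() checks on entry and exit that the VM would:

```python
import functools
import inspect

def check_annotations(func):
    """Approximate the instrumented VM: on each call, run isinstance()
    checks derived from the function's annotations."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        # Check each argument against its annotation, if it is a type.
        for name, value in bound.arguments.items():
            ann = func.__annotations__.get(name)
            if isinstance(ann, type) and not isinstance(value, ann):
                raise TypeError(f"{name}={value!r} is not {ann.__name__}")
        result = func(*args, **kwargs)
        # Check the return value the same way.
        ret = func.__annotations__.get("return")
        if isinstance(ret, type) and not isinstance(result, ret):
            raise TypeError(f"return value {result!r} is not {ret.__name__}")
        return result

    return wrapper

@check_annotations
def add(x: int, y: int) -> int:
    return x + y

add(1, 2)        # passes the checks silently
try:
    add(1, "2")  # caught at call time, as the instrumented VM would do
except TypeError as e:
    print("caught:", e)
```

One would enable such checking only under the testsuite, exactly as described above, since the per-call overhead is unacceptable in production.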

>> Even more interesting usage is to allow ahead-of-time, and thus
>> unbloated, optimization. There are a bunch of JITters and AOTters
>> for the Python language, each of which uses its own syntax (via
>> decorators, etc.) to annotate functions.

> As a developer of one of those tools, I've already said that I find
> it unlikely for the PEP to be useful for that purpose. The issue is
> that the vocabulary created in the PEP is not extensible enough.

How so, if it essentially allows you to make typedefs? The syntax of such a typedef follows basic conventions (so any tool should understand its basic semantics), but you're free to assign implicit, tool-particular (but of course documented) semantics to it.

> Note I'm not saying it's impossible. I'm just skeptical that in its
> current form it will help us. And apparently none of our "competitors"

That's my biggest fear - that the "JIT" Python community is not yet ready to receive and value this feature.

> seems very enthusiastic either (feel free to prove me wrong: I might
> have missed something :-)).

Let me try: MicroPython already uses type annotations for statically typed functions. E.g.

def add(x:int, y:int): return x + y

will translate the function to just 2 machine instructions. And we'd prefer to use standard language syntax, instead of having our own conventions (e.g. above you can see that the return type is inferred).

>> Having language-specified type annotations allows for portable
>> syntax for such optimized code.

> Only if the annotations allow expressing the subtleties required by
> the specific optimizer. For example, "Float" is too vague for Numba:
> we would like to know if that is meant to be a single- or
> double-precision float.

Oh really, you care to support single precision in Numba? We have a lot of problems with single-precision floats in MicroPython, because the whole Python stdlib API was written with the implicit assumption that a float is (at least) a double. E.g., we cannot have a time.time() which both returns fractional seconds and is based on Jan 1, 1970 - there are simply not enough bits in a single-precision float! It's on my backlog to bring up this and related issues before the wider Python development community.
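A quick way to see the problem: round-trip a 2015-era Unix timestamp through IEEE 754 single precision (a sketch; `to_float32` is just an illustration helper). A float32 has only a 24-bit significand, while the epoch-seconds value already needs ~31 bits, so the fractional seconds cannot survive:

```python
import struct

def to_float32(x):
    """Round-trip a Python float (a C double) through IEEE 754
    single precision, as a single-precision build would store it."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

t = 1429630070.123456   # fractional seconds since Jan 1, 1970
t32 = to_float32(t)
print(t32)              # fractional part is gone entirely
print(t32 - t)          # rounding error of several seconds
```

At this magnitude the spacing between adjacent float32 values is 128 seconds, so not only the fraction but even whole seconds are lost.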

Anyway, back to your example, it would be done like:

SFloat = float
DFloat = float

For a random tool out there, "SFloat" and "DFloat" would be just aliases for float, but Numba would know they have additional semantics behind them. (That assumes typedefs like SFloat can be accessed in symbolic form - that's certainly possible if you have your own parser/VM, but it might be worth thinking about how to do that at the "CPython" level.)
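One way to keep the symbolic form at runtime even in plain CPython is to make the alias a distinct object rather than a bare assignment. A minimal sketch (the `typedef` helper is hypothetical, not part of the PEP):

```python
def typedef(name, base):
    """Create a named alias: compatible with `base` for isinstance()
    and issubclass() checks, but carrying its own symbolic name so a
    specializing tool can tell the aliases apart."""
    return type(name, (base,), {"__base_type__": base})

SFloat = typedef("SFloat", float)  # single-precision intent
DFloat = typedef("DFloat", float)  # double-precision intent

# A generic tool just sees float-compatible types...
assert issubclass(SFloat, float) and issubclass(DFloat, float)

# ...but a tool like Numba could still recover the extra semantics:
def precision(t):
    return "single" if t.__name__ == "SFloat" else "double"

print(precision(SFloat))  # single
print(precision(DFloat))  # double
```

With a bare `SFloat = float`, by contrast, the name is erased at runtime and only a source-level parser could recover it.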

>> Don't block the language if you're stuck with an unimaginative
>> implementation, there's much more to Python than that.

> The Python language doesn't really have anything to do with that.
> It's just an additional library with a set of conventions. Which is
> also why a PEP wouldn't be required to make it alive, it's just there
> to make it an official standard.

Yes, and IMHO the "JIT/compiler" Python community is too fragmented and too unaware (ignorant?) of other projects' efforts; having an "official standard" would help both the "JIT" community and help establish Python as a language with support for efficient compile-time optimizations.

> Regards,
> Antoine.

--
Best regards,
Paul                          mailto:pmiscml at gmail.com


