Statically typed variables enforcement
.. PEP: 9999
.. Title: Optional Runtime Enforcement of Type Annotations
.. Author: Karlos <karlos.santana13@gmail.com>
.. Status: Draft
.. Type: Standards Track
.. Content-Type: text/x-rst
.. Created: 2024-05-26
Abstract
This PEP proposes adding an optional mechanism to enforce type annotations
at runtime in Python, enabling developers to opt-in to stricter typing behavior
when desired.
Motivation
Python’s type hints (PEP 484 and related) are currently used for static
analysis and documentation, but they are not enforced at runtime. This leads to
the following issues:
- Developers expect type hints to be respected, but Python still accepts values of any type.
- Bugs caused by incorrect types (e.g., passing a ``bool`` where an ``int`` is expected) are silently ignored.
- Existing runtime type checkers (like ``typeguard`` and ``beartype``) are external and inconsistent.

This PEP aims to make type annotations optionally enforceable by the interpreter or via a standard decorator.
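To make the motivation concrete, a minimal example (assuming a plain CPython interpreter with no checker installed) showing that annotations are not checked when a function is called:

```python
def add(a: int, b: int) -> int:
    return a + b

# The annotations say int, but CPython happily accepts strings
# and concatenates them instead of adding numbers:
result = add("type", "hints")
print(result)  # -> 'typehints'
```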
Specification
We propose two mechanisms for enabling runtime type enforcement:
- Decorator-based Enforcement
Introduce a new decorator in the standard library:
.. code-block:: python

   from typing_enforce import enforce_types

   @enforce_types
   def greet(name: str, times: int) -> None:
       print(("Hello, " + name) * times)

   greet("World", 3)        # OK
   greet("World", "three")  # TypeError
- Interpreter Flag (Optional)
Add a new interpreter flag:
.. code-block:: bash

   python -X enforce-types script.py

When enabled, all function calls with annotations will check argument types and return values against their annotations. Violations will raise ``TypeError``.
Backwards Compatibility
This feature is opt-in. Existing codebases will not be affected unless they explicitly enable it.
Reference Implementation
Prototype decorators already exist in third-party libraries:

- ``typeguard``
- ``beartype``

This PEP proposes adapting a simplified version of such behavior for inclusion in the standard library.
Rejected Ideas
- Mandatory enforcement: Too disruptive; not Pythonic.
- Postponed evaluation only: Doesn’t address runtime safety.
Copyright
This document has been placed in the public domain.
nedbat (Ned Batchelder) May 26, 2025, 5:04pm 2
Karlos, thanks for your interest in advancing Python. Any time an addition to the standard library is suggested, the first question has to be: why put it in the standard library? You’ve linked to two third-party packages that do what you want. Why not use those?
Run-time typechecking is not something the standard library will use itself (because of the time overhead), so there’s no clear need for the ability to be added to the stdlib. It’s a simple matter to use third-party dependencies, and getting easier all the time.
Have you used the libraries you linked to? What is your impression of them?
sharktide (Rihaan Meher) May 26, 2025, 6:54pm 3
I implemented this in
Even though the repository is off topic, there is a strict types decorator in there:
import inspect
from functools import wraps
from typing import get_type_hints

def strict_types(func):
    sig = inspect.signature(func)
    type_hints = get_type_hints(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound_args = sig.bind(*args, **kwargs)
        bound_args.apply_defaults()

        # Check argument types
        for name, value in bound_args.arguments.items():
            expected_type = type_hints.get(name)
            if expected_type and not isinstance(value, expected_type):
                raise TypeError(
                    f"Argument '{name}' expected {expected_type.__name__}, "
                    f"got {type(value).__name__}")

        # Call the function
        result = func(*args, **kwargs)

        # Check return type
        expected_return = type_hints.get('return')
        if expected_return and not isinstance(result, expected_return):
            raise TypeError(
                f"Return value expected {expected_return.__name__}, "
                f"got {type(result).__name__}")

        return result
    return wrapper
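For readers unfamiliar with the inspect machinery the decorator relies on, a small standalone illustration of how Signature.bind maps call arguments to parameter names (the greet function is just a sample, not part of the decorator):

```python
import inspect

def greet(name: str, times: int = 1) -> None:
    print(("Hello, " + name) * times)

sig = inspect.signature(greet)
bound = sig.bind("World")   # the same mapping the wrapper performs
bound.apply_defaults()      # fills in times=1 from the default
print(bound.arguments)      # -> {'name': 'World', 'times': 1}
```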
Maybe this isn’t such a bad idea… A lot of people would like it, but I have to agree with @nedbat that if it were this necessary to a project, someone could surely install a library.
csm10495 (Charles Machalow) May 26, 2025, 8:45pm 4
I find that sometimes type hints are a bit ‘wrong’. Something may be hinted as a list[str], but if I pass a tuple[str, ...] it works fine. Maybe it wants a dict but I pass a defaultdict. If it looks like a duck, it’s close enough to being a duck.
It’s really hard to have perfect type hints that are 100% encompassing of all acceptable cases.
A lot of the time the type hints that are 100% encompassing are hard to read too.
With that in mind I think the 3rd party libraries are probably reasonable enough to accomplish this, but don’t know if it would fit in the stdlib.
da-woods (Da Woods) May 26, 2025, 9:10pm 5
I think the interpreter flag to enforce typing would cause a lot of unhappiness.
It’d be unusable unless all of your dependencies were perfectly typed. Any library that didn’t meet this would most likely be met with a lot of bug reports, which would be a bit unfair given that typing has always been presented as optional and unchecked.
I think if you were going to do this, it would have to be on an opt-in, per-module basis.
oscarbenjamin (Oscar Benjamin) May 26, 2025, 9:27pm 6
Python’s static types cannot be checked at runtime:

def func(arg: list[int]):
    arg.append(1)

stuff: list[str] = []
func(stuff)

There is no way for the interpreter to check this because the annotation for stuff is not attached to the list object.
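The same limitation shows up directly in isinstance(), which is all a runtime checker has to work with: it sees the container’s class but not its element type, and CPython rejects parameterized generics in instance checks outright.

```python
# isinstance() only sees the runtime class, never the element type:
print(isinstance(["a", "b"], list))   # True - but list[int] or list[str]?

# Parameterized generics are rejected outright in instance checks:
try:
    isinstance(["a", "b"], list[int])
except TypeError as e:
    print(e)  # isinstance() argument 2 cannot be a parameterized generic
```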
Runtime typing is generally a misguided concept except in limited situations. Third party packages can do some useful things but the runtime aspect of typing has not been designed in a way that it could actually be integrated as part of the core language.
Resolving the types and determining the subtype relation is probably NP-complete or something so even if objects had static types attached there is no way you would want to integrate that kind of checking into the runtime.
If you want a decorator that does some limited type checking then this is something that belongs in a third party package.
Karlos (kpluta) May 27, 2025, 2:08pm 7
Great points — thanks for raising them.
You’re absolutely right that the bar for inclusion in the standard library is high, and the presence of mature third-party alternatives usually makes it unnecessary. In this case, my curiosity comes partly from an educational and ecosystem perspective rather than a strict need for new functionality in the stdlib.
Why even consider stdlib inclusion?
The motivation wasn’t just “this would be handy,” but rather:
- Education and consistency: Many beginners assume type hints do something at runtime. Having an official, minimal utility to bridge that gap could reduce confusion — similar to how dataclasses clarified a formerly common boilerplate pattern.
- Ubiquity and trust: Some domains (e.g., finance, government) resist non-stdlib packages even if they’re solid. A minimal check_type() might help in those contexts.
- Ergonomics and discoverability: A lot of devs don’t know beartype or typeguard, and writing manual checks with isinstance()/__annotations__ is repetitive.
That said — you’re right: performance and low-level adoption are valid concerns. It’s not something the stdlib would use itself, which weakens the argument.
On those third-party packages:
Yes, I’ve tried both:
- typeguard is solid, though sometimes overkill for simple checks. It patches functions at runtime, which can feel a bit intrusive.
- beartype is impressively clever (and fast) — but its magic feels… un-Pythonic at times, especially when debugging errors.
Both are great — I’m not arguing they should be replaced, just wondering aloud if there’s a minimal, no-magic check_type(x, T) that could be included for convenience. Like a typing.validate_type(x, T) — not enforced by the interpreter, but helpful in userland.
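As a rough illustration only, the kind of minimal, no-magic helper being floated here might look like the following sketch (check_type is a hypothetical name, not an existing typing API, and it deliberately handles plain classes only):

```python
def check_type(value, expected):
    """Raise TypeError if value is not an instance of expected.

    Hypothetical sketch: handles plain classes only, no Union or generics.
    """
    if not isinstance(value, expected):
        raise TypeError(f"{value!r} is not of type {expected.__name__}")
    return value

check_type(3, int)      # returns 3 unchanged
check_type("3", str)    # returns '3' unchanged
# check_type("3", int)  # would raise TypeError
```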
But I take your point. Unless there’s a strong case where the standard library itself would benefit — or the user experience would be radically improved — it probably belongs outside the core.
If you were to suggest anything here as a PEP, would you lean toward:
- a micro-utility (like typing.check_type),
- documentation improvements to guide users to typeguard/beartype,
- or just leave it to the ecosystem entirely?
Thanks again for the thoughtful response!
MegaIng (Cornelius Krupp) May 27, 2025, 2:21pm 8
The issue is that “correctly” implementing such a function at runtime
- is impossible (like, literally. See message by @oscarbenjamin)
- is complex. The complexity of this single function would rival entire stdlib modules.
- is slow, unless you start being clever - and being clever invites an infinite stream of bugs.
- is high maintenance, since every new typing feature needs to be supported (and new typing features are also released semi-independently from Python versions via typing_extensions — meaning that package would also need to contain an implementation).
typeguard and beartype can “get away” with some of these issues because they are not in the stdlib. They are not expected to fulfill all niches — a stdlib function would be expected to do that.
Karlos (kpluta) May 27, 2025, 2:31pm 9
Hi!
Thanks for the discussion — I have two main points I’d love some clarity or thoughts on:
1. Usage of strict_types()
I’m trying to experiment with runtime type checking and used the following code:
def plus(a: int = 2, b: int = 2):
    return str(a) + str(b)

strict_types(plus(2, "j"))
This gives me:
TypeError: '2j' is not a callable object
It seems like strict_types(...) is being passed the result of plus(...) instead of the function itself. So my guess is that the correct usage is either:
@strict_types
def plus(a: int, b: int):
    return str(a) + str(b)

plus(2, "j")
or
def plus(a: int, b: int):
    return str(a) + str(b)

strict_plus = strict_types(plus)
strict_plus(2, "j")
Can someone confirm this?
2. Why annotate at all, if it’s not enforced?
I mostly agree with the perspective that additional runtime checking tools should remain in external packages to keep the stdlib lightweight. That said — it does raise a philosophical question:
If type annotations are not enforced by the runtime, aren’t they just fancy docstrings?
Wouldn’t it make more sense to put types in docstrings, where they might even be more readable and flexible?
I get the value of tooling like mypy, pyright, etc., and I’m not dismissing that — but it feels like Python is caught halfway between two typing worlds: static-but-not-enforced and dynamic-but-not-safe.
Thanks again for the thoughtful comments — I’m learning a lot from reading through these threads!
Karlos
nedbat (Ned Batchelder) May 27, 2025, 2:53pm 10
You say you get the value, but you then dismiss it. Moving types into docstrings “where they can be more flexible” (presumably because you can use English to describe the types) would put them out of reach of static type checkers like mypy and pyright. The value of annotations is they can be checked by tools like mypy.
You are right: the Python runtime does not enforce the type annotations. You can opt-in to enforcement by running a type checker like pyright and attending to its warnings.
dg-pb (dg-pb) May 27, 2025, 3:07pm 11
Completely agree with @MegaIng
I have written a full-featured run-time checker and went through all the complexities and optimizations.
The thing is - if you want something basic - there is already beartype. It does achieve good performance and does what it says well.
If you want something more advanced, something like that would be premature at the very least. Without certain tools (that standard library does not currently have) you will not be able to achieve:
- Ability to describe any object. Such would require close alignment with typing and its favourable evolution. Or if not, then there needs to be a divergence from it. And such a divergent toolkit is a project on its own…
- Performance:
  - to actually have something that is appropriate for the highest level of code complexity is non-trivial.
  - to make it fully optimized, such needs a multi-set infrastructure and a symbolic bool logic solver - think sympy.
Don’t get me wrong, I would be very pleased if standard library evolved in these areas to such degree.
But before then, such is simply not worth it, given that the result with the current toolkit would be sub-par.
Maybe one day it will be (think 20 years) or maybe such things are simply out of scope forever and destined to be 3rd party utilities.
P.S. A big bonus is that once such a thing exists, multiple dispatch is pretty much free, since it can take advantage of the full feature set. This is pretty much what plumdispatch did with beartype.
peterc (peter) May 27, 2025, 3:21pm 12
mypy enforces type hints to be correct when code is added to the git code base of a project, if you choose to set up a project that way. That is better than enforcing types at run time, or even at compile time. It means that if you clone a git repo set up this way, you’re guaranteed that so long as you obey the typing contract, the typing contract will be satisfied. In contrast, run-time type enforcement can only ever guarantee that if the typing contract is broken (even if it isn’t the fault of the user), the program will crash.
Python typing does have problems. [1]
And as you yourself just deduced, decorators need to be used as decorators to satisfy their usage contract; otherwise you’ll get unexpected results.
- It doesn’t seem to be designed to describe all cases. And even as more things keep being patched on, I doubt it will ever be as robust as type systems that have been designed that way from the ground up, like Rust’s. It would be nice if Python’s typing system were reworked with the transition to Python 4. And if that could happen during my lifetime
↩︎
dg-pb (dg-pb) May 27, 2025, 3:23pm 13
Yup, your “hidden text” was also the conclusion that I arrived at when I was looking into this. It feels that it was driven by spontaneously arising needs as opposed to pre-meditated vision.
pf_moore (Paul Moore) May 27, 2025, 3:47pm 14
It was driven by an extremely deliberate and conscious decision that typing was to be optional, and that untyped and typed code could live together in the same codebase (in order to allow untyped libraries to be used by typed code).
The need for gradual types and the complexities they introduce are at the root of most of Python’s typing “problems”. I put quotes around the word because it’s not clear how you could “fix” them without abandoning the vision of optional, gradual typing. (In fact, the typing community is doing an excellent job in that area, but it’s a genuinely hard task).
Karlos (kpluta) May 27, 2025, 5:17pm 15
actually, they can!
import inspect
from typing import get_type_hints

def enforce_types(func):
    sig = inspect.signature(func)
    type_hints = get_type_hints(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()

        # Check arguments
        for name, value in bound.arguments.items():
            if name in type_hints:
                expected = type_hints[name]
                if not isinstance(value, expected):
                    raise TypeError(f"Argument '{name}' = {value!r} is not of type {expected}")

        # Original function
        result = func(*args, **kwargs)

        # Check the returned value
        if 'return' in type_hints:
            expected = type_hints['return']
            if not isinstance(result, expected):
                raise TypeError(f"Return value {result!r} is not of type {expected}")

        return result
    return wrapper
So, use it with @enforce_types and everything works (well — actually only the basic types; things like Union, Optional, Literal, Any, or List[int] etc. from typing won’t work).
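For completeness, a hedged sketch of how the isinstance check could be extended to cover simple Union/Optional hints via typing.get_origin and get_args (the matches helper is hypothetical, and parameterized element types like the int in List[int] are still not checked):

```python
from typing import Optional, Union, get_args, get_origin

def matches(value, hint):
    """Best-effort runtime check for a small subset of typing hints."""
    origin = get_origin(hint)
    if origin is Union:                 # covers Optional[X] too
        return any(matches(value, arg) for arg in get_args(hint))
    if origin is not None:              # e.g. list[int] checks only list
        return isinstance(value, origin)
    return isinstance(value, hint)

print(matches(None, Optional[int]))   # True
print(matches([1, 2], list))          # True
print(matches("x", Optional[int]))    # False
```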
Thanks for feedback,
Karlos.
Karlos (kpluta) May 27, 2025, 5:22pm 16
Hi Ned — I really appreciate your response!
You’re absolutely right — static type checkers are the primary consumers of annotations, and I completely agree they’re powerful and essential in larger codebases.
My comment about “docstrings” was more of a provocation than a proposal — I just wanted to highlight the confusion that newcomers often experience when they see something like def foo(x: int) and assume it’s enforced by the interpreter.
What I’m exploring is whether there’s space in the standard library — even in an optional or auxiliary module — for minimal runtime enforcement, for those who explicitly want it.
Just like dataclasses brought structured data handling without needing external tools like attrs, perhaps something like typingtools.check_type() or @enforce_types could help bridge that mental gap between “type hints” and “actual behavior.”
Tools like beartype are fantastic — but they’re often invisible to beginners, and a bit too magical under the hood. Having something official (even minimal) in the typing namespace might offer more clarity and guidance, even if it’s not adopted by the runtime itself.
Thanks again for the thoughtful discussion — I was happy to respond!
— Karlos
MegaIng (Cornelius Krupp) May 27, 2025, 5:26pm 17
So the example oscar gave you fails with this type checker? It should after all.
nedbat (Ned Batchelder) May 27, 2025, 5:49pm 18
Yes, it is surprising to newcomers that the annotations don’t do anything at runtime. Part of that confusion is because type declarations in other languages can block you from running your program, so they seem to do something at runtime, but they don’t actually. Pre-run static type checking in Python is very much like static type checking in Java (for example), except that Java won’t let you run if the type checking fails, whereas Python will let you run.
Mostly you don’t want runtime type checking. We shouldn’t add a tool to the stdlib that does something most people don’t want, especially where it gets complicated, where there are sharp edge cases, and where the rest of the standard library won’t make use of the tool.
I think we should focus on educating beginners on how type annotations work, and teach them the value of pre-run static type checking.
nemocpp (NemoCur) May 27, 2025, 6:42pm 19
Hi everyone, I personally don’t use static typing but this is what came to my mind:
We could have an abstract TypeEnforcer class with a logging-like syntax (root TypeEnforcer or per-file TypeEnforcer) in a stdlib module that 3rd-party libraries can incorporate.
Then, using that same module, we’d have an official way of enforcing (or disabling) type checking, assuming you have set up a valid TypeEnforcer.
Out of the scope:
Having it be official could imply that in the future it could also be used to make bytecode specialization a bit more aggressive, if it proves to be feasible.
sharktide (Rihaan Meher) May 27, 2025, 10:36pm 20
import inspect
from typing import get_type_hints

def enforce_types(func):
    sig = inspect.signature(func)
    type_hints = get_type_hints(func)

    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()

        # Check arguments
        for name, value in bound.arguments.items():
            if name in type_hints:
                expected = type_hints[name]
                if not isinstance(value, expected):
                    raise TypeError(f"Argument '{name}' = {value!r} is not of type {expected}")

        # Original function
        result = func(*args, **kwargs)

        # Check the returned value
        if 'return' in type_hints:
            expected = type_hints['return']
            if not isinstance(result, expected):
                raise TypeError(f"Return value {result!r} is not of type {expected}")

        return result
    return wrapper
Isn’t this just a version of the one I shared?