[Python-Dev] PEP 492: What is the real goal?
Jim J. Jewett jimjjewett at gmail.com
Thu Apr 30 20:41:55 CEST 2015
- Previous message (by thread): [Python-Dev] PEP 492: What is the real goal?
- Next message (by thread): [Python-Dev] PEP 492: What is the real goal?
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Wed Apr 29 20:06:23 CEST 2015, Yury Selivanov replied:
>> As best I can guess, the difference seems to be that a "normal" generator is using yield primarily to say:
>> "I'm not done; I have more values when you want them",
>> but an asynchronous (PEP 492) coroutine is primarily saying:
>> "This might take a while, go ahead and do something else meanwhile."
> Correct.
Then I strongly request a more specific name than coroutine.
I would prefer something that refers to cooperative pre-emption, but I haven't thought of anything that is short without leading to other types of confusion.
My least bad idea at the moment would be "self-suspending coroutine" to emphasize that suspending themselves is a crucial feature.
Even "PEP492-coroutine" would be an improvement.
>> Does it really permit making them [asynchronous calls], or does it just signal that you will be waiting for them to finish processing anyhow, and it doesn't need to be a busy-wait?
> It does.
Bad phrasing on my part. Is there anything that prevents an asynchronous call (or waiting for one) without the "async with"?
If so, I'm missing something important. Either way, I would prefer different wording in the PEP.
> It uses the 'yield from' implementation with an extra step of validating its argument. 'await' only accepts an awaitable, which can be one of:
>> What justifies this limitation?
> We want to avoid people passing regular generators and random objects to 'await', because it is a bug.
Why?
Is it a bug just because you defined it that way?
Is it a bug because the "await" makes timing claims that an object not making such a promise probably won't meet? (In other words, a marker interface.)
Is it likely to be a symptom of something that wasn't converted correctly, and there are likely to be other bugs caused by that same lack of conversion?
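(For what it's worth, the restriction under discussion is easy to observe with today's asyncio: 'await' on a plain generator raises TypeError immediately rather than silently iterating it. A minimal sketch; the names are illustrative, not from the PEP:)

```python
import asyncio

def plain_gen():
    # A regular generator: "I have more values when you want them".
    yield 1

async def main():
    try:
        await plain_gen()  # not an awaitable: no __await__ method
    except TypeError as exc:
        return type(exc).__name__

print(asyncio.run(main()))  # prints TypeError
```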
> For coroutines in PEP 492:
>     __await__ = __anext__ is the same as __call__ = __next__
>     __await__ = __aiter__ is the same as __call__ = __iter__
That tells me that it will be OK sometimes, but will usually be either a mistake or an API problem -- and it explains why.
Please put those 3 lines in the PEP.
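(The __await__/__call__ parallel can be made concrete: an object becomes awaitable by defining __await__ returning an iterator, just as an object becomes iterable via __iter__. A hedged sketch under current asyncio; 'Ticket' is an invented name:)

```python
import asyncio

class Ticket:
    # Awaitable by protocol: __await__ must return an iterator,
    # analogous to __iter__ for ordinary 'for' loops.
    def __init__(self, value):
        self.value = value

    def __await__(self):
        # Delegate to a coroutine so the event loop can drive the wait.
        return self._fetch().__await__()

    async def _fetch(self):
        await asyncio.sleep(0)  # stand-in for real I/O
        return self.value

async def main():
    return await Ticket(42)

print(asyncio.run(main()))  # prints 42
```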
> This is OK. The point is that you can use 'await log' in __aenter__. If you don't need awaits in __aenter__, you can use them in __aexit__. If you don't need them there either, then just define a regular context manager.
Is it an error to use "async with" on a regular context manager? If so, why? If it is just that doing so could be misleading, then what about "async with mgr1, mgr2, mgr3" -- is it enough that one of the three might suspend itself?
>> class AsyncContextManager:
>>     def __aenter__(self):
>>         log('entering context')
> __aenter__ must return an awaitable
Why? Is there a fundamental reason, or it is just to avoid the hassle of figuring out whether or not the returned object is a future that might still need awaiting?
Is there an assumption that the scheduler will let the thing-being awaited run immediately, but look for other tasks when it returns, and a further assumption that something which finishes the whole task would be too slow to run right away?
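(For reference, the "must return an awaitable" rule is satisfied simply by declaring __aenter__ with 'async def', since calling a coroutine function returns a coroutine object, which is awaitable. A sketch of the quoted example fixed up that way, with the 'log' call replaced by a stand-in:)

```python
import asyncio

class AsyncContextManager:
    async def __aenter__(self):
        # 'async def' makes this call return a coroutine object,
        # which is the awaitable that 'async with' requires.
        await asyncio.sleep(0)  # stand-in for 'await log(...)'
        return self             # bound to the 'as' target

    async def __aexit__(self, exc_type, exc, tb):
        await asyncio.sleep(0)
        return False            # do not suppress exceptions

async def main():
    async with AsyncContextManager() as mgr:
        return isinstance(mgr, AsyncContextManager)

print(asyncio.run(main()))  # prints True
```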
> It doesn't make any sense to use 'async with' outside of a coroutine. The interpreter won't know what to do with it: you need an event loop for that.
So does the PEP also provide some way of ensuring that there is an event loop? Does it assume that self-suspending coroutines will only ever be called by an already-running event loop compatible with asyncio.get_event_loop()? If so, please make these contextual assumptions explicit near the beginning of the PEP.
> It is a TypeError to pass a regular iterable without an __aiter__ method to 'async for'. It is a SyntaxError to use 'async for' outside of a coroutine.
The same questions about why -- what is the harm?
I can imagine that, as an implementation detail, the 'async for' wouldn't be taken advantage of unless it was running under an event loop that knew to look for 'async for' as a suspension point.
I'm not seeing what the actual harm is in either not happening to suspend (less efficient, but still correct), or in suspending between every step of a regular iterator (because, why not?)
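(For reference, the protocol 'async for' checks: the iterable needs __aiter__, and each step goes through an awaitable __anext__, so every iteration is a potential suspension point. A sketch under current asyncio; 'Counter' is illustrative:)

```python
import asyncio

class Counter:
    # Async-iterable protocol: __aiter__ returns the async iterator,
    # and __anext__ is a coroutine, so each step may suspend.
    def __init__(self, stop):
        self.i, self.stop = 0, stop

    def __aiter__(self):
        return self

    async def __anext__(self):
        if self.i >= self.stop:
            raise StopAsyncIteration
        self.i += 1
        await asyncio.sleep(0)  # suspension point on every step
        return self.i

async def main():
    return [n async for n in Counter(3)]

print(asyncio.run(main()))  # prints [1, 2, 3]
```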
> For debugging this kind of mistake there is a special debug mode in asyncio, in which the @coroutine decorator makes the decision of whether or not to wrap based on an OS environment variable, PYTHONASYNCIODEBUG.
(1) How does this differ from the existing asyncio.coroutine?
(2) Why does it need to have an environment variable? (Sadly, the answer may be "backwards compatibility", if you're really just specifying the existing asyncio interface better.)
(3) Why does it need [set/get]_coroutine_wrapper, instead of just setting the asyncio.coroutines.coroutine attribute?
(4) Why do the get/set need to be in sys?
Is the intent to do anything more than preface execution with:

    import asyncio.coroutines
    asyncio.coroutines._DEBUG = True
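(A hedged sketch of the decision being asked about: a decorator that consults the environment variable once and skips wrapping entirely when it is unset. This mirrors the described behaviour only in spirit; asyncio's real decorator checked PYTHONASYNCIODEBUG at module import time, and the wrapping details differ:)

```python
import functools
import os

def coroutine(func):
    # Hypothetical: decide once, based on the env var, whether to wrap,
    # so production code pays no wrapping cost at all.
    if not os.environ.get('PYTHONASYNCIODEBUG'):
        return func
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls += 1  # debug bookkeeping, e.g. recording call sites
        return func(*args, **kwargs)
    wrapper.calls = 0
    return wrapper

os.environ.pop('PYTHONASYNCIODEBUG', None)

def f():
    return 'ok'

print(coroutine(f) is f)  # prints True: unwrapped when the flag is unset
```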
-jJ
--
If there are still threading problems with my replies, please email me with details, so that I can try to resolve them. -jJ