[Python-ideas] Tulip / PEP 3156 event loop implementation question: CPU vs. I/O starvation
Robert Collins robertc at robertcollins.net
Sat Jan 12 06:06:22 CET 2013
- Previous message: [Python-ideas] Tulip / PEP 3156 event loop implementation question: CPU vs. I/O starvation
- Next message: [Python-ideas] Tulip / PEP 3156 event loop implementation question: CPU vs. I/O starvation
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On 12 January 2013 12:41, Guido van Rossum <guido at python.org> wrote:
Here's an interesting puzzle. Check out the core of Tulip's event loop:
http://code.google.com/p/tulip/source/browse/tulip/unixevents.py#672

    nowready = list(ready)
    ready.clear()
    for handler in nowready:
        call handler
However this implies that we go back to the I/O polling code more frequently. While the I/O polling code sets the timeout to zero when there's anything in the ready queue, so it won't block, it still isn't free; it's an expensive system call that we'd like to put off until we have nothing better to do.
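The quoted loop pattern can be sketched in isolation. This is an illustrative reconstruction, not Tulip's actual code: snapshotting the ready queue means that any callback scheduled while the batch runs is deferred until after the next trip through the I/O poll.

```python
import collections

# Illustrative ready queue; Tulip's real loop tracks more state than this.
ready = collections.deque()

def run_once(ready):
    # Snapshot the queue, then clear it, so callbacks scheduled by the
    # handlers themselves wait for the next iteration (and the next poll).
    nowready = list(ready)
    ready.clear()
    for handler in nowready:
        handler()

results = []
ready.append(lambda: results.append("a"))
# This handler re-schedules work; it should not run in the same batch.
ready.append(lambda: ready.append(lambda: results.append("b")))

run_once(ready)
# After one pass: "a" has run, and the re-scheduled callback is still queued.
```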
How expensive is it really? If it's select(), it's terrible, but we shouldn't be using that anywhere. If it's poll(), it is moderately expensive, and it still doesn't scale: it's linear in the number of fds.
If it's I/O completion ports on Windows, it is approximately free - the OS calls back into us every time we tell it we're ready for more events. And if it's epoll it is also basically free, reading off an event queue rather than checking every entry in the array. kqueue has similar efficiency on BSD systems.
I'd want to see some actual numbers before assuming that the call into epoll or completion is actually a driving factor in latency here.
-Rob
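One way to get the "actual numbers" asked for above is a quick micro-benchmark of the zero-timeout poll the loop issues between batches. This is a rough sketch using the stdlib selectors module (which postdates this thread but wraps the same epoll/kqueue/poll syscalls); the figures vary by platform and selector, so treat it as a starting point, not a measurement.

```python
import selectors
import socket
import time

# Register one socket, then time repeated non-blocking polls - the
# "timeout of zero when the ready queue is non-empty" fast path.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

n = 10_000
start = time.perf_counter()
for _ in range(n):
    sel.select(timeout=0)  # non-blocking poll of the registered fd
elapsed = time.perf_counter() - start

print(f"{elapsed / n * 1e6:.2f} us per zero-timeout select()")

sel.close()
a.close()
b.close()
```

On an epoll or kqueue backend this typically comes out to single-digit microseconds per call, which is why measuring before optimizing is the right instinct here.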