[Python-Dev] [PEP 3148] futures - execute computations asynchronously
Brett Cannon brett at python.org
Fri Mar 5 21:38:43 CET 2010
- Previous message: [Python-Dev] [PEP 3148] futures - execute computations asynchronously
- Next message: [Python-Dev] [PEP 3148] futures - execute computations asynchronously
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
The PEP says that futures.wait() should only use keyword arguments past its
first positional argument, but the PEP has the function signature as

    wait(fs, timeout=None, return_when=ALL_COMPLETED)

Should it be

    wait(fs, *, timeout=None, return_when=ALL_COMPLETED)

?
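[For readers unfamiliar with the bare "*" marker (keyword-only arguments,
PEP 3102), a minimal sketch of the difference; the wait() below is a stub
and ALL_COMPLETED a stand-in constant, not the PEP's implementation:]

```python
# A bare "*" in the parameter list makes every parameter after it
# keyword-only: passing those arguments positionally raises TypeError
# instead of silently binding.
ALL_COMPLETED = 'ALL_COMPLETED'  # stand-in constant for this sketch

def wait(fs, *, timeout=None, return_when=ALL_COMPLETED):
    return fs, timeout, return_when

print(wait([1, 2], timeout=5))   # keyword call: fine
try:
    wait([1, 2], 5)              # positional timeout: rejected
except TypeError:
    print('positional timeout rejected')
```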
On Thu, Mar 4, 2010 at 22:03, Brian Quinlan <brian at sweetapp.com> wrote:
Hi all,
I recently submitted a draft PEP for a package designed to make it easier
to execute Python functions asynchronously using threads and processes. It
lets the user focus on their computational problem without having to build
explicit thread/process pools and work queues.

The package has been discussed on stdlib-sig but now I'd like this group's
feedback.

The PEP lives here:
http://python.org/dev/peps/pep-3148/

Here are two examples to whet your appetites:

"""Determine if several numbers are prime."""
import futures
import math

PRIMES = [
    112272535095293,
    112582705942171,
    112272535095293,
    115280095190773,
    115797848077099,
    1099726899285419]

def is_prime(n):
    if n % 2 == 0:
        return False

    sqrt_n = int(math.floor(math.sqrt(n)))
    for i in range(3, sqrt_n + 1, 2):
        if n % i == 0:
            return False
    return True

# Uses as many CPUs as your machine has.
with futures.ProcessPoolExecutor() as executor:
    for number, prime in zip(PRIMES, executor.map(is_prime, PRIMES)):
        print('%d is prime: %s' % (number, prime))
"""Print out the size of the home pages of various new sites (and Fox News).""" import futures import urllib.request URLS = ['http://www.foxnews.com/', 'http://www.cnn.com/', 'http://europe.wsj.com/', 'http://www.bbc.co.uk/', 'http://some-made-up-domain.com/'] def loadurl(url, timeout): return urllib.request.urlopen(url, timeout=timeout).read() with futures.ThreadPoolExecutor(maxworkers=5) as executor: # Create a future for each URL load. futuretourl = dict((executor.submit(loadurl, url, 60), url) for url in URLS) # Iterate over the futures in the order that they complete. for future in futures.ascompleted(futuretourl): url = futuretourl[future] if future.exception() is not None: print('%r generated an exception: %s' % (url, future.exception())) else: print('%r page is %d bytes' % (url, len(future.result()))) Cheers, Brian