async-lru
Simple LRU cache for asyncio.
Installation
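The package is published on PyPI; a typical install looks like:

```shell
pip install async-lru
```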
Usage
This package is a port of Python's built-in functools.lru_cache for asyncio. To better handle async behaviour, it also ensures that multiple concurrent calls result in only one call to the wrapped coroutine, with every awaiting caller receiving the result of that call once it completes.
```python
import asyncio

import aiohttp

from async_lru import alru_cache


@alru_cache(maxsize=32)
async def get_pep(num):
    resource = 'http://www.python.org/dev/peps/pep-%04d/' % num
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                return await s.read()
        except aiohttp.ClientError:
            return 'Not Found'


async def main():
    for n in 8, 290, 308, 320, 8, 218, 320, 279, 289, 320, 9991:
        pep = await get_pep(n)
        print(n, len(pep))

    print(get_pep.cache_info())
    # CacheInfo(hits=3, misses=8, maxsize=32, currsize=8)

    # closing is optional, but highly recommended
    await get_pep.cache_close()


asyncio.run(main())
```
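The single-call guarantee described above can be illustrated with a stdlib-only sketch. The `share_inflight` decorator below is a hypothetical helper written for this example, not async-lru's API: it stores the in-flight task per argument tuple so that concurrent awaiters share one underlying call.

```python
import asyncio


def share_inflight(fn):
    # Hypothetical helper sketching the "single flight" idea: concurrent
    # calls with the same arguments await one shared task instead of each
    # invoking the wrapped coroutine.
    tasks = {}

    async def wrapper(*args):
        if args not in tasks:
            tasks[args] = asyncio.ensure_future(fn(*args))
        return await tasks[args]

    return wrapper


call_count = 0


@share_inflight
async def slow_double(x):
    global call_count
    call_count += 1
    await asyncio.sleep(0.01)
    return x * 2


async def main():
    # Ten concurrent awaits, but the wrapped coroutine runs only once.
    results = await asyncio.gather(*(slow_double(21) for _ in range(10)))
    print(results[0], call_count)  # 42 1


asyncio.run(main())
```

async-lru layers LRU eviction and TTL on top of this idea; the sketch only shows why concurrent callers do not trigger duplicate work.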
TTL (time-to-live in seconds; entries expire after the timeout) is supported via the ttl configuration parameter (off by default):
```python
@alru_cache(ttl=5)
async def func(arg):
    return arg * 2
```
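To make the expiry semantics concrete, here is a minimal stdlib-only sketch of a timed cache. The `ttl_cache` decorator is a hypothetical illustration, not async-lru's implementation: a cached value is reused until `ttl` seconds have passed, after which the coroutine is called again.

```python
import asyncio
import time


def ttl_cache(ttl):
    # Hypothetical TTL sketch: maps args -> (expires_at, value).
    def decorator(fn):
        entries = {}

        async def wrapper(*args):
            now = time.monotonic()
            hit = entries.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: reuse cached value
            value = await fn(*args)
            entries[args] = (now + ttl, value)
            return value

        return wrapper
    return decorator


calls = 0


@ttl_cache(ttl=0.05)
async def func(arg):
    global calls
    calls += 1
    return arg * 2


async def demo():
    await func(3)              # miss: computed
    await func(3)              # hit: served from cache
    await asyncio.sleep(0.06)  # wait past the TTL
    await func(3)              # expired: recomputed
    print(calls)  # 2


asyncio.run(demo())
```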
The library supports explicit invalidation of a specific function call via cache_invalidate():
```python
@alru_cache(ttl=5)
async def func(arg1, arg2):
    return arg1 + arg2


func.cache_invalidate(1, arg2=2)
```
The method returns True if the corresponding set of arguments was already cached, and False otherwise.
Thanks
The library was donated by Ocean S.A.
Thanks to the company for its contribution.