msg249406
Author: Marek Otahal (Marek Otahal)
Date: 2015-08-31 15:47
I'd like to use @lru_cache in a library. The problem is I can't know the optimal value for 'maxsize' in advance; I need to set it at runtime. I came up with 2 possibilities:

1/ Set the cache's size from a keyword argument of the decorated method:

    @lru_cache
    def foo_long(self, arg1, ..., **kwds):
        pass

    # use: foo_long('hi', cacheSize=1000)

This approach allows users to customize the cache size for their problem.

2/ From the function's **kwds, retrieve the name (a string) of an instance attribute that holds the cache's size. This is not as clean as 1/, but offers greater functionality for use in library software: we can compute the desired cache size from internal values of the instance, and the use of the cache can be totally transparent to the end user:

    @lru_cache
    def foo_long(self, arg1, ..., **kwds):
        pass

    # use: foo_long('hi', cacheSizeRef='_cacheSize')

What do you think about the proposal?

best regards, Mark
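Proposal (1) is not part of the stdlib lru_cache; a minimal sketch of what it might look like as a user-written wrapper (the decorator name `runtime_sized_cache` and the `cacheSize` keyword are hypothetical, taken from the proposal above) could be:

```python
from functools import lru_cache, wraps

def runtime_sized_cache(func):
    """Hypothetical sketch of proposal (1): the first call may pass
    cacheSize=N, which sizes the underlying lru_cache."""
    state = {"cached": None}

    @wraps(func)
    def wrapper(*args, **kwds):
        # pop the control keyword so it never reaches func or the cache key
        size = kwds.pop("cacheSize", 128)
        if state["cached"] is None:
            # build the real cache lazily, sized at first call
            state["cached"] = lru_cache(maxsize=size)(func)
        return state["cached"](*args, **kwds)

    return wrapper

@runtime_sized_cache
def foo_long(arg1):
    return arg1.upper()

print(foo_long("hi", cacheSize=1000))  # 'HI'
```

Note the size is only honored on the first call here; later `cacheSize` values are silently ignored, which illustrates one of the design wrinkles this proposal would have to resolve.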
|
|
msg249407
Author: R. David Murray (r.david.murray)
Date: 2015-08-31 15:54
How is (1) different from:

    @lru_cache(maxsize=1000)
    def foo_long(self, arg1...)

As for computing it at runtime: if you need to compute it, you can compute it and *then* define the decorator-wrapped function.
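The "compute first, then decorate" approach can be sketched as follows (`get_cache_size` is a hypothetical stand-in for whatever computes the size at startup):

```python
from functools import lru_cache

def get_cache_size():
    # hypothetical: derive the desired size from config, env, data, etc.
    return 1000

size = get_cache_size()  # computed at runtime, before decoration

@lru_cache(maxsize=size)
def foo_long(arg1):
    return arg1 * 2

print(foo_long(21))                   # 42
print(foo_long.cache_info().maxsize)  # 1000
```

Because the decorator line is evaluated when the function is defined, any value computed earlier in the program can size the cache without any extra API.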
|
|
msg249408
Author: Marek Otahal (Marek Otahal)
Date: 2015-08-31 16:00
Hope this example is not too confusing; it's a patch to my code and to lru_cache (a backport for Python 2.7 from ActiveState). It implements both approaches highlighted above, and the test uses both of them (which would not normally make much sense; one would use only one or the other).
|
|
msg249409
Author: Marek Otahal (Marek Otahal)
Date: 2015-08-31 16:09
Hi David,

> How is (1) different from:
> @lru_cache(maxsize=1000)
> def foo_long(self, arg1...)

As I mentioned, this is for use in a library that is called by end users. They can call functions and pass parameters, but they do not edit the code. It's up to me (the lib dev) to prepare the cache decorator. I.e.:

    class MyLib():
        @lru_cache
        def foo_long(self, arg1, **kwds):
            pass

    # user
    import MyLib
    i = MyLib()
    i.foo_long(1337)

> As for computing it at runtime: if you need to compute it, you can compute it and *then* define the decorator wrapped function.

Ditto as above: at runtime no new decorator definitions should be needed for a library. Plus there is a speed penalty: I'd have to wrap the wrapper in 1 or 2 more nested functions, which incurs a speed penalty, and we're focusing on the cache here. I especially mention this as I've noticed the ongoing effort to use a C implementation of lru_cache.
|
|
msg249410
Author: Marek Otahal (Marek Otahal)
Date: 2015-08-31 16:15
EDIT:

> i.foo_long(1337)

Of course, this should be:

    i.foo_long('hi', cacheSize=1337)

or for (2):

    class MyLib():
        def __init__(self, arg1, arg2):
            self._cacheSize = someComputation(arg1, arg2)  # returns a number

        @lru_cache
        def foo_long(self, arg1, **kwds):
            pass

    # user
    import MyLib
    i = MyLib(100, 21)
    # not to make it so simple:
    i.changeInternalStateSomehow()  # updates arg1, arg2, and also _cacheSize
    i.foo_long(1337, cacheSizeName='_cacheSize')  # ref to self._cacheSize
|
|
msg249421
Author: R. David Murray (r.david.murray)
Date: 2015-08-31 18:53
There is no patch/example attached. It seems like what you really want is an API on lru_cache for updating the cache size. What I'm saying is that the cache size can be passed in on the MyLib call, and the decorator/function can be constructed as part of MyLib's initialization.
|
|
msg249447
Author: Raymond Hettinger (rhettinger)
Date: 2015-09-01 02:57
> The problem is I can't know the optimal values for 'maxsize',
> I need to set them at runtime.

The easiest way to go is to wait to start caching until you know the cache size you want:

    def foo(a, b, c):
        pass

    size = get_user_request()
    foo = lru_cache(maxsize=size)(foo)

If there is a subsequent need to change the cache size, just rewrap it:

    size = get_user_request()
    original_function = foo.__wrapped__
    foo = lru_cache(maxsize=size)(original_function)
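The rewrap idiom can be shown end to end in a small runnable sketch (the literal sizes stand in for whatever `get_user_request` would return; `get_user_request` itself is not a real API):

```python
from functools import lru_cache

def foo(a, b, c):
    return a + b + c

# initial wrap, once the desired size is known
foo = lru_cache(maxsize=100)(foo)
print(foo.cache_info().maxsize)  # 100

# later: rewrap with a new size; __wrapped__ recovers the original function
original_function = foo.__wrapped__
foo = lru_cache(maxsize=500)(original_function)
print(foo(1, 2, 3))              # 6
print(foo.cache_info().maxsize)  # 500
```

Note that rewrapping discards the old cache's contents, so this suits occasional reconfiguration rather than frequent resizing.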
|
|
msg249933
Author: Raymond Hettinger (rhettinger)
Date: 2015-09-05 21:47
Sorry, I'm going to reject this one. This option was considered during the design of the LRU cache and not included because it complicated the design, because the use cases were not common (the norm is set-and-forget), and because the __wrapped__ attribute provides a means of rewrapping or unwrapping as needed (i.e. there are reasonable workarounds).
|
|