

On 2/9/2012 11:53 AM, Mike Meyer wrote:
On Thu, 9 Feb 2012 14:19:59 -0500  
Brett Cannon <brett@python.org> wrote:  
On Thu, Feb 9, 2012 at 13:43, PJ Eby <pje@telecommunity.com> wrote:  
Again, the goal is fast startup of command-line tools that only use a  
small subset of the overall framework; doing disk access for lazy imports  
goes against that goal.

It depends on whether you consider the stat calls to be the overhead, as
opposed to the actual disk read/write to load the data. Anyway, this is going
to lead to a discussion/argument over design parameters which I'm not up to
having, since I'm not actively working on a lazy loader for the stdlib right now.
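[For context, the kind of lazy loader being discussed can be sketched with the
`importlib.util.LazyLoader` machinery that later landed in the stdlib (Python
3.5+, so it postdates this 2012 thread). Note that `find_spec` still performs
the stat-heavy path search up front; only execution of the module's code is
deferred, which is exactly the trade-off being debated above:]

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose code is not executed until first attribute access."""
    spec = importlib.util.find_spec(name)       # the stat-call-heavy search happens here
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)                  # sets up deferral; real exec on first use
    return module

json = lazy_import("json")     # no json module code has run yet
print(json.dumps({"a": 1}))    # first attribute access triggers the real load
```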

For those of you not watching -ideas, or ignoring the "Python TIOBE
-3%" discussion, this would seem to be relevant to any discussion of
reworking the import mechanism:

http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059801.html

<mike


So what is the implication here? That building a cache of module
locations (cleared when a new module is installed) would be more
effective than optimizing the search for modules on every invocation
of Python?
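[A minimal sketch of what such a cache might look like: a mapping from module
name to file location, persisted to disk and cleared by the installer. The file
name `module_locations.json` and the helper names here are hypothetical, not
any real CPython mechanism; the real interpreter uses `sys.path_importer_cache`
per process, not across invocations:]

```python
import importlib.util
import json
import os

CACHE_FILE = "module_locations.json"  # hypothetical on-disk cache

def load_cache():
    """Read the persisted cache, treating a missing/corrupt file as empty."""
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except (OSError, ValueError):
        return {}

def find_module(name, cache):
    """Look up a module's file path, filling and persisting the cache on a miss."""
    if name not in cache:
        spec = importlib.util.find_spec(name)   # the per-invocation search being avoided
        cache[name] = spec.origin if spec else None
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)                 # an installer would delete this file
    return cache[name]

cache = load_cache()
print(find_module("json", cache))  # later invocations are served from the cache
```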