Issue 984219: hotshot.stats.load is very slow
This issue has been migrated to GitHub: https://github.com/python/cpython/issues/40494
classification

Title: | hotshot.stats.load is very slow | |
---|---|---|---
Type: | enhancement | Stage: | test needed
Components: | Library (Lib) | Versions: | Python 2.7
process

Status: | closed | Resolution: | wont fix
---|---|---|---
Dependencies: | | Superseder: |
Assigned To: | | Nosy List: | complex, georg.brandl, pitrou, vdupras
Priority: | normal | Keywords: |
Created on 2004-07-02 17:53 by complex, last changed 2022-04-11 14:56 by admin. This issue is now closed.
Messages (7)
msg21381 | Author: Viktor Ferenczi (complex) | Date: 2004-07-02 17:53

hotshot.Profile generates a very large (10-70 MB) file for complex applications, and hotshot.stats.load loads such a profile file very slowly.
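For context, the workflow the report describes looks like this under Python 2; a minimal sketch of the documented hotshot API, where benchmark() and the file name example.prof are placeholders rather than anything from the report:

```python
import hotshot
import hotshot.stats

def benchmark():
    # Placeholder workload standing in for the real application.
    return sum(i * i for i in range(10 ** 5))

# Every call/return event is appended to the log file, which is why
# profiles of complex applications grow to tens of megabytes.
prof = hotshot.Profile("example.prof")
prof.runcall(benchmark)
prof.close()

# This is the step the report says is slow: the whole event log is
# replayed in Python to rebuild pstats-compatible statistics.
stats = hotshot.stats.load("example.prof")
stats.sort_stats("time", "calls")
stats.print_stats(20)
```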
msg21382 | Author: Viktor Ferenczi (complex) | Date: 2004-07-02 18:09

Python version: 2.3.4
msg21383 | Author: Viktor Ferenczi (complex) | Date: 2004-07-02 18:10

OS: WinXP Prof ENG SP1
CPU: Intel P4 Northwood 1.6G
MEM: 512 MB, 800 MHz RDRAM
msg63917 | Author: Virgil Dupras (vdupras) | Date: 2008-03-18 14:41

I had a 54 MB hotshot profile lying around, and it is indeed very slow to load, so I ran a profiling session of hotshot.stats.load(MY_BIG_FILE) with the Python interpreter and stdlib at r61515. Here are the results (the resulting prof file is 27 MB):

    96541166 function calls in 299.936 CPU seconds

       Ordered by: cumulative time
       List reduced from 30 to 15 due to restriction <15>

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1    0.000    0.000  299.936  299.936 stats.py:11(load)
            1   95.089   95.089  299.936  299.936 stats.py:22(load)
     27583167   51.183    0.000   61.815    0.000 log.py:95(next)
     13791583   59.314    0.000   59.314    0.000 profile.py:328(trace_dispatch_return)
     13791583   35.014    0.000   42.910    0.000 stats.py:54(new_frame)
     13791584   40.807    0.000   40.807    0.000 profile.py:295(trace_dispatch_call)
     13791583   10.632    0.000   10.632    0.000 log.py:138(_decode_location)
     13791583    7.897    0.000    7.897    0.000 stats.py:87(__init__)
            1    0.000    0.000    0.000    0.000 pstats.py:73(__init__)
            1    0.000    0.000    0.000    0.000 pstats.py:95(init)
            1    0.000    0.000    0.000    0.000 pstats.py:138(get_top_level_stats)
            1    0.000    0.000    0.000    0.000 log.py:24(__init__)
            1    0.000    0.000    0.000    0.000 pstats.py:117(load_stats)
            1    0.000    0.000    0.000    0.000 profile.py:436(create_stats)
            1    0.000    0.000    0.000    0.000 profile.py:440(snapshot_stats)

    96541166 function calls in 299.936 CPU seconds

       Ordered by: internal time
       List reduced from 30 to 20 due to restriction <20>

       ncalls  tottime  percall  cumtime  percall filename:lineno(function)
            1   95.089   95.089  299.936  299.936 stats.py:22(load)
     13791583   59.314    0.000   59.314    0.000 profile.py:328(trace_dispatch_return)
     27583167   51.183    0.000   61.815    0.000 log.py:95(next)
     13791584   40.807    0.000   40.807    0.000 profile.py:295(trace_dispatch_call)
     13791583   35.014    0.000   42.910    0.000 stats.py:54(new_frame)
     13791583   10.632    0.000   10.632    0.000 log.py:138(_decode_location)
     13791583    7.897    0.000    7.897    0.000 stats.py:87(__init__)
            1    0.000    0.000    0.000    0.000 log.py:24(__init__)
            1    0.000    0.000    0.000    0.000 profile.py:440(snapshot_stats)
            1    0.000    0.000    0.000    0.000 pstats.py:138(get_top_level_stats)
           30    0.000    0.000    0.000    0.000 pstats.py:484(func_std_string)
            5    0.000    0.000    0.000    0.000 posixpath.py:306(normpath)
            1    0.000    0.000  299.936  299.936 stats.py:11(load)
           24    0.000    0.000    0.000    0.000 stats.py:80(__init__)
            1    0.000    0.000    0.000    0.000 pstats.py:117(load_stats)
            1    0.000    0.000    0.000    0.000 profile.py:402(simulate_call)
            1    0.000    0.000    0.000    0.000 profile.py:118(_get_time_resource)
            1    0.000    0.000    0.000    0.000 pstats.py:73(__init__)
            1    0.000    0.000    0.000    0.000 pstats.py:95(init)
            1    0.000    0.000    0.000    0.000 profile.py:166(__init__)

So the bulk of the time seems to be taken by the sheer number of trace_dispatch_return and trace_dispatch_call calls. I took a quick look, but I'm not sure I can do anything to make it faster. If no one else has any idea, I suggest just closing the ticket.
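A measurement like the one above can be reproduced roughly as follows; a minimal sketch, assuming cProfile is used to time the load (the message does not say which profiler produced these numbers) and using placeholder file names:

```python
import cProfile
import pstats
import hotshot.stats

# Profile hotshot.stats.load() itself and dump the raw data to disk;
# big.prof and load.profile are placeholder file names.
cProfile.runctx("hotshot.stats.load('big.prof')", globals(), locals(),
                "load.profile")

stats = pstats.Stats("load.profile")
stats.sort_stats("cumulative").print_stats(15)  # "Ordered by: cumulative time"
stats.sort_stats("time").print_stats(20)        # "Ordered by: internal time"
```

The trace_dispatch_call/trace_dispatch_return entries in the listing come from hotshot.stats replaying each logged enter/exit event through a profile.Profile subclass, one call per event, which is why the call counts reach tens of millions for a multi-megabyte log.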
msg64275 | Author: Georg Brandl (georg.brandl) * | Date: 2008-03-21 20:42

Turning into a feature request.
msg82042 | Author: Antoine Pitrou (pitrou) * | Date: 2009-02-14 12:57

hotshot is more or less deprecated; cProfile is the recommended replacement for high-performance profiling. I suggest closing this issue as "won't fix".
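For completeness, the replacement workflow looks like this; a minimal sketch of the cProfile/pstats API, where main() and app.profile are placeholders:

```python
import cProfile
import pstats

def main():
    # Placeholder workload standing in for the code being profiled.
    return sum(i * i for i in range(10 ** 6))

profiler = cProfile.Profile()
profiler.runcall(main)              # profile a single call
profiler.dump_stats("app.profile")  # binary dump readable by pstats

# Loading cProfile output is fast: the statistics are already
# aggregated, so there is no event log to replay.
stats = pstats.Stats("app.profile")
stats.sort_stats("cumulative").print_stats(20)
```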
msg82089 | Author: Viktor Ferenczi (complex) | Date: 2009-02-14 16:48

I agree. Just close this as "won't fix".
History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:56:05 | admin | set | github: 40494
2009-02-14 16:53:06 | benjamin.peterson | set | status: open -> closed; resolution: wont fix
2009-02-14 16:48:13 | complex | set | messages: +
2009-02-14 12:57:27 | pitrou | set | nosy: + pitrou; messages: +
2009-02-14 12:52:50 | ajaksu2 | set | stage: test needed; versions: + Python 2.7, - Python 2.3
2008-03-21 20:42:27 | georg.brandl | set | type: performance -> enhancement; messages: +; nosy: + georg.brandl
2008-03-18 14:41:32 | vdupras | set | nosy: + vdupras; messages: +
2008-03-16 21:06:27 | georg.brandl | set | type: performance
2004-07-02 17:53:53 | complex | create |