On Mon, Oct 2, 2017 at 6:42 PM, Raymond Hettinger <raymond.hettinger@gmail.com> wrote:
> On Oct 2, 2017, at 12:39 AM, Nick Coghlan <ncoghlan@gmail.com> wrote:
>
> > "What requests uses" can identify a useful set of
> > avoidable imports. A Flask "Hello world" app could likely provide
> > another such sample, as could some example data analysis notebooks.
> Right. It is probably worthwhile to identify which parts of the library are typically imported but are not ever used. And likewise, identify a core set of commonly used tools that are going to be almost unavoidable in sufficiently interesting applications (like using requests to access a REST API, running a micro-webframework, or invoking mercurial).
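A minimal sketch of one way to get that list (not from the thread; it assumes requests is installed): diff sys.modules around the import, then grep the application for each name to see which ones it actually touches.

import sys

before = set(sys.modules)
import requests  # the import whose fallout we want to see
pulled_in = sorted(set(sys.modules) - before)

print("'import requests' pulled in %d modules:" % len(pulled_in))
for name in pulled_in:
    print(name)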
> Presumably, if any of this is going to make a difference to end users, we need to see if there is any avoidable work that takes a significant fraction of the total time from invocation through the point where the user first sees meaningful output. That would include loading from nonvolatile storage, executing the various imports, and doing the actual application work.
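One rough way to split that total up (a sketch, not from the thread; it reuses the demo script shown further down) is to drop perf_counter checkpoints into the script itself, so the import cost and the actual work are reported separately; whatever time(1) reports beyond the in-script total is interpreter start-up and loading from disk.

import time
t0 = time.perf_counter()  # interpreter start-up has already been paid by this point

import requests  # the potentially avoidable work
t1 = time.perf_counter()

info = requests.get('https://api.github.com/users/raymondh').json()
print('%(name)s works at %(company)s. Contact at %(email)s' % info)
t2 = time.perf_counter()

print('imports: %.3fs   work: %.3fs' % (t1 - t0, t2 - t1))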
> I don't expect to find anything that would help users of Django, Flask, and Bottle since those are typically long-running apps where we value response time more than startup time.
>
> For scripts using the requests module, there will be some fruit because not everything that is imported is used. However, that may not be significant because scripts using requests tend to be I/O bound. In the timings below, 6% of the running time is used to load and run python.exe, another 16% is used to import requests, and the remaining 78% is devoted to the actual task of running a simple REST API query. It would be interesting to see how much of the 16% could be avoided without major alterations to requests, to urllib3, and to the standard library.
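For a first look at where that 16% goes, one option (again a sketch, not something from the thread) is to profile the import itself in a fresh interpreter and sort by cumulative time; the profiler overhead inflates the absolute numbers, but the ranking shows which submodules and dependencies dominate.

import cProfile

# Must run in a fresh process: if requests is already in sys.modules,
# the statement below is just a cache lookup and nothing gets profiled.
cProfile.run("import requests", sort="cumulative")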
It is certainly true that for a CLI tool that actually does any network I/O, especially over SSL, import times will quickly become negligible. It gets tricky for complex tools because of error handling. For example, a common pattern I have used in the past is to have a high-level "catch all exceptions" function that dispatches the CLI command:
try:
    main_function(...)
except ErrorKind1:
    ...
except requests.exceptions.SSLError:
    # give a complete message about options when receiving SSL errors,
    # e.g. an invalid certificate
    ...
This pattern requires importing requests every time the command is run, even if no network I/O is actually done. For complex CLI tools, most commands may not use network I/O at all (the tool in question was a complete package manager), but you pay ~100 ms for the requests import on every command. It is particularly visible because command latency starts to be felt around 100-150 ms, and while you can do a lot in Python in 100-150 ms, you can't do much in 0-50 ms.
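One workaround that keeps the catch-all behaviour without paying for requests on every command (a sketch, not what that tool actually did): move the import into the handler. If a requests exception is actually in flight, the module is already loaded, so the import there is just a sys.modules lookup; on the happy path it never runs at all.

def run_cli(main_function, *args):
    try:
        return main_function(*args)
    except Exception as exc:
        # Only the error path pays for this import; when the exception
        # really is a requests.exceptions.SSLError, requests is already
        # in sys.modules, so the import is essentially free.
        import requests
        if isinstance(exc, requests.exceptions.SSLError):
            print("SSL error, e.g. an invalid certificate; "
                  "see the SSL-related options")
            return 1
        raise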
David
> For mercurial, "hg log" or "hg commit" will likely be instructive about what portion of the imports actually get used. A push or pull will likely be I/O bound so those commands are less informative.
>
> Raymond
>
> --------- Quick timing for a minimal script using the requests module -----------
>
> $ cat > demo_github_rest_api.py
> import requests
> info = requests.get('https://api.github.com/users/raymondh').json()
> print('%(name)s works at %(company)s. Contact at %(email)s' % info)
>
> $ time python3.6 demo_github_rest_api.py
> Raymond Hettinger works at SauceLabs. Contact at None
>
> real 0m0.561s
> user 0m0.134s
> sys 0m0.018s
>
> $ time python3.6 -c "import requests"
>
> real 0m0.125s
> user 0m0.104s
> sys 0m0.014s
>
> $ time python3.6 -c ""
>
> real 0m0.036s
> user 0m0.024s
> sys 0m0.005s