msg69758
Author: Darryl Dixon (esrever_otua)
Date: 2008-07-16 04:01
The system recursion limit seems to be wildly different in its behaviour on 2.6/trunk versus, for example, 2.5 or 2.4. E.g., on Python 2.4:

Python 2.4.3 (#1, Dec 11 2006, 11:38:52)
[GCC 4.1.1 20061130 (Red Hat 4.1.1-43)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class rec(object):
...     child = None
...     def __init__(self, counter):
...         if counter > 0:
...             self.child = rec(counter-1)
...
>>> mychain = rec(998)
>>>

On Python 2.6/trunk:

Python 2.6b1+ (trunk:64998, Jul 16 2008, 15:50:22)
[GCC 4.1.1 20070105 (Red Hat 4.1.1-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> class rec(object):
...     child = None
...     def __init__(self, counter):
...         if counter > 0:
...             self.child = rec(counter-1)
...
>>> mychain = rec(249)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in __init__
  [...snip...]
  File "<stdin>", line 5, in __init__
RuntimeError: maximum recursion depth exceeded
>>>

In both cases sys.getrecursionlimit() shows 1000. Is this behaviour intentional? It looks a lot like a regression of some sort. The effective nesting depth appears to be roughly 4x shorter when creating the nested object graph.
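A quick way to pin down the effective nesting limit on a given build is to binary-search for the deepest chain that still constructs. This probe is a hypothetical illustration, not part of the original report:

    import sys

    class rec(object):
        child = None
        def __init__(self, counter):
            if counter > 0:
                self.child = rec(counter - 1)

    def effective_depth():
        # Binary-search the largest n for which rec(n) completes without
        # raising "maximum recursion depth exceeded". The result is
        # approximate: the probe itself consumes a few frames.
        lo, hi = 0, sys.getrecursionlimit()
        while lo < hi:
            mid = (lo + hi + 1) // 2
            try:
                rec(mid)
                lo = mid      # mid levels fit within the limit
            except RuntimeError:
                hi = mid - 1  # mid levels overflow; search lower
        return lo

    print "effective depth:", effective_depth()
    print "recursion limit:", sys.getrecursionlimit()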
|
|
msg69760
Author: Brett Cannon (brett.cannon)
Date: 2008-07-16 04:12
A lot of crashers were fixed for 2.6 where the recursion limit was not being used at all. That is probably what you are seeing. |
|
|
msg69762
Author: Darryl Dixon (esrever_otua)
Date: 2008-07-16 04:18
Hmmm, I'm not certain I agree: on 2.4/2.5, rec(999) hits the recursion limit as expected (it makes sense that there would be an item or two on the stack prior to the immediate call to rec()). This looks more like the interpreter is adding 4x the number of items to the stack during the construction of the nested object, which seems pretty surprising/broken...
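One way to test this claim without a debugger is to count how many Python frames actually exist when the RuntimeError fires; if each nesting level burns several recursion units, the traceback will be far shorter than the limit. A hypothetical check, not from the original discussion:

    import sys, traceback

    class rec(object):
        child = None
        def __init__(self, counter):
            if counter > 0:
                self.child = rec(counter - 1)

    try:
        rec(10000)  # deep enough to be sure of overflowing
    except RuntimeError:
        # Each nesting level contributes one __init__ frame, so the
        # traceback length approximates the real Python-frame depth.
        frames = len(traceback.extract_tb(sys.exc_info()[2]))
        print "Python frames at failure:", frames
        print "recursion limit:        ", sys.getrecursionlimit()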
|
|
msg69766
Author: Brett Cannon (brett.cannon)
Date: 2008-07-16 05:28
Well, probably the best way to find out would be to run under gdb and see who is incrementing the recursion count. |
|
|
msg69772
Author: Antoine Pitrou (pitrou)
Date: 2008-07-16 11:20
Why was 1000 chosen in the first place? If it's just an arbitrary value then we can bump it to 4000 so that people don't get bad surprises when upgrading their Python.

> This looks more like the interpreter is adding 4x the number of items
> to the stack during the construction of the nested object, which seems
> pretty surprising/broken...

Well, PyObject_Call() increases the recursion count, and entering __init__ will increase it once more. That explains the 2x, though not the 4x.
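The 4x would be accounted for if two more PyObject_Call() hops sit on the path. A plausible but unverified breakdown of the C-level chain when instantiating a new-style class (exactly what the gdb trace suggested above would confirm or refute):

    # Hypothetical per-level accounting for `self.child = rec(counter-1)`
    # on 2.6 -- each line below is one recursion-count increment:
    #   +1  PyObject_Call() on the type object rec
    #   +1  PyObject_Call() on the bound __init__ method (slot_tp_init)
    #   +1  PyObject_Call() on the plain function (instancemethod_call)
    #   +1  PyEval_EvalFrameEx() entering the __init__ bytecode
    # 1000 (limit) / 4 (units per level) = 250 levels, consistent with
    # rec(249) already overflowing in the trunk session above.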
|
|
msg69810
Author: Brett Cannon (brett.cannon)
Date: 2008-07-16 18:25
On Wed, Jul 16, 2008 at 4:20 AM, Antoine Pitrou <report@bugs.python.org> wrote:

> Why was 1000 chosen in the first place? If it's just an arbitrary value
> then we can bump it to 4000 so that people don't get bad surprises when
> upgrading their Python.

It was originally 10,000, but people wanted thread switches to occur more often.

> > This looks more like the interpreter is adding 4x the number of items
> > to the stack during the construction of the nested object, which seems
> > pretty surprising/broken...
>
> Well, PyObject_Call increases the recursion count, and entering __init__
> will increase it once more. That explains the 2x, not the 4x though.

As I said, without a comparison of traces this will continue to be just speculation (and I don't have the time to do that).
|
|
msg69812
Author: Benjamin Peterson (benjamin.peterson)
Date: 2008-07-16 18:28
Brett:

> It was originally 10,000, but people wanted thread switches to occur
> more often.

I thought that was managed by sys.setcheckinterval.
|
|
msg69813
Author: Brett Cannon (brett.cannon)
Date: 2008-07-16 18:30
On Wed, Jul 16, 2008 at 11:28 AM, Benjamin Peterson <report@bugs.python.org> wrote:

> Brett:
> > It was originally 10,000, but people wanted thread switches to occur
> > more often.
>
> I thought that was managed by sys.setcheckinterval.

Yes, it is; sorry, my brain is slow today. I know the current value usually does not lead to a segfault on any of the common platforms that Python runs on.
|
|
msg69859
Author: Trent Nelson (trent)
Date: 2008-07-16 23:58
Darryl, was your trunk version built as debug? If so, can you try without? |
|
|
msg69863
Author: Darryl Dixon (esrever_otua)
Date: 2008-07-17 01:18
Hi Trent,

No, my build did not invoke --with-pydebug. In other words, the process I used was simply:

svn co http://svn.python.org/projects/python/trunk python-trunk
cd python-trunk
./configure --prefix=/home/dixond/throwaway
make
make install
|
|
msg69903
Author: Antoine Pitrou (pitrou)
Date: 2008-07-17 20:06
Here is a small script that shows various possibilities depending on how object creation is done. The output with the trunk is:

rec1 stopped at 1000
rec2 stopped at 1000
rec3 stopped at 500
rec4 stopped at 334
rec5 stopped at 334
rec6 stopped at 250

With 2.5, the output is:

rec1 stopped at 1000
rec2 stopped at 1000
rec3 stopped at 500
rec4 stopped at 1000
rec5 stopped at 1000
rec6 stopped at 1000

I think we should just acknowledge that recursion counting has become stricter (PyObject_Call() increases the count, and then PyEval_EvalFrameEx() will increase it a second time if Python code is entered), and bump the default recursion limit.

(The reason calling Python functions directly doesn't increase the recursion count twice is that there are optimization shortcuts in ceval.c that avoid calling PyObject_Call(); that is not the case when calling a type object.)
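The script itself is attached to the issue rather than quoted here; a minimal sketch of the measurement pattern it presumably follows (the variant names and exact construction styles are guesses) might be:

    import sys

    deepest = [0]

    def rec_function(n):
        # Plain Python-to-Python call: ceval.c's fast path applies,
        # so roughly one recursion unit per level.
        deepest[0] = n
        rec_function(n + 1)

    class RecType(object):
        # Instantiating a type goes through PyObject_Call() and
        # tp_init, costing several recursion units per level.
        def __init__(self, n):
            deepest[0] = n
            RecType(n + 1)

    def probe(func, name):
        # Recurse until the limit fires and report the depth reached,
        # mimicking the "recN stopped at ..." lines quoted above.
        deepest[0] = 0
        try:
            func(1)
        except RuntimeError:
            pass
        print "%s stopped at %d" % (name, deepest[0])

    probe(rec_function, "plain function")
    probe(RecType, "type call")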
|
|
msg71601
Author: Benjamin Peterson (benjamin.peterson)
Date: 2008-08-21 02:49
Guido, can you comment? |
|
|
msg71636
Author: Antoine Pitrou (pitrou)
Date: 2008-08-21 13:27
(FWIW, I just ran Misc/find_recursion_limit.py on a 32-bit Windows box. With 2.6 I get 5900. With 3.0 I get 9000. So the good news is that 3.0 seems to be less stack-hungry :-)) |
|
|
msg71653
Author: Guido van Rossum (gvanrossum)
Date: 2008-08-21 16:10
I think it's fine as it is. Incrementing the stack level more frequently is a good thing since there used to be paths that didn't increment it at all and hence could cause segfaults. The default is conservative since increasing it could cause segfaults, and on some systems threads don't get a very large stack. |
|
|