[please restrict follow-ups to python-ideas]

Let's not get hung up on meta-discussion here - I always thought "massive clusterf**k" was a precise technical term anyway ;-)

While timing certainly needs to be done more carefully, it's obvious to me that this approach _should_ pay off significantly when it applies. Comparisons are extraordinarily expensive in Python, precisely because of the maze of test-and-branch code it requires just to figure out which bottom-level comparison function to invoke each time. That's why I spent months of my life (overall) devising a sequence of sorting algorithms for Python that reduced the number of comparisons needed.
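
To make the expense concrete, here's a minimal timing sketch (my own illustration - the Boxed class is made up) contrasting elements whose comparisons resolve quickly with elements that drag every compare through full rich-comparison dispatch to a Python-level method:

    import timeit

    ints = list(range(10000, 0, -1))

    class Boxed:
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            # every element comparison wades through rich-comparison
            # dispatch before reaching this Python-level method
            return self.v < other.v

    boxed = [Boxed(v) for v in range(10000, 0, -1)]

    print("ints: ", timeit.timeit(lambda: sorted(ints), number=20))
    print("boxed:", timeit.timeit(lambda: sorted(boxed), number=20))

Same sort, same data shape, wildly different per-comparison cost.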

Note that when Python's current sort was adopted in Java, they still kept a quicksort variant for "unboxed" builtin types. The adaptive merge sort incurs many overheads that often cost more than they save unless comparisons are in fact very expensive compared to the cost of pointer copying (and in Java comparison of unboxed types is cheap). Indeed, for native numeric types, where comparison is dirt cheap, quicksort generally runs faster than mergesort despite that the former does _more_ comparisons (because mergesort does so much more pointer-copying).

I had considered something "like this" for Python 2, but didn't pursue it because comparison was defined between virtually any two types (34 < [1], etc.), and people were careless about that (both by design and by accident). In Python 3, comparison "blows up" for absurdly mixed types, so specializing for homogeneously-typed lists is a more promising idea on the face of it.
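
For concreteness (a made-up example - and the exact TypeError message varies across 3.x versions):

    >>> 34 < [1]     # Python 2: legal, by design or accident
    True
    >>> 34 < [1]     # Python 3: blows up
    Traceback (most recent call last):
      ...
    TypeError: '<' not supported between instances of 'int' and 'list'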

Determining _whether_ a list's objects all have a common type requires just len(list)-1 C-level pointer comparisons, so the check goes fast. I expect that, when it applies, this would speed even sorting an already-ordered list with at least 2 elements.
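
Here's the logic of that pre-scan in Python terms (the real thing would compare raw ob_type pointers in C - the function name here is mine):

    def same_type(lst):
        # assumes a non-empty list; len(lst)-1 identity checks in one
        # pass - in C this is a loop of ob_type pointer comparisons,
        # about as cheap as checks get
        t = type(lst[0])
        return all(type(x) is t for x in lst[1:])

Note that all() bails at the first mismatch, which is also why the mixed-type case below stays cheap.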

For a mixed-type list with at least 2 elements, the scan will always be pure loss. But (a) I expect such lists are uncommon (and especially uncommon in Python 3); and (b) a one-time scan doing C-level pointer comparisons until finding a mismatched type is bound to be a relatively tiny cost compared to the expense of all the "rich comparisons" that follow.

So +1 from me on pursuing this.

Elliot, please:

- Keep this on python-ideas. python-dev is for current issues in Python development, not for speculating about changes.

- Open an issue on the tracker: https://bugs.python.org/

- At least browse the info for developers: https://docs.python.org/devguide/

- Don't overlook Lib/test/sortperf.py. As is, it should be a good test of what your approach so far _doesn't_ help, since it sorts only lists of floats (& I don't think you're special-casing them). If the timing results it reports aren't significantly hurt (and I expect they won't be), then add specialization for floats too and gloat about the speedup :-)

- I expect tuples will also be worth specializing (complex sort keys are often implemented as tuples - a tiny illustration follows this list).
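
On the tuple point (my snippet, not Elliot's code): sorting by a compound key makes every comparison the sort performs a tuple-vs-tuple comparison, so a tuple fast path would apply throughout:

    # complex sort key implemented as a tuple: the key array the sort
    # actually compares is homogeneously tuple-typed
    people = [("Jones", "Ann", 34), ("Smith", "Bob", 27), ("Jones", "Al", 41)]
    people.sort(key=lambda p: (p[0], p[1]))   # by (last, first)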

Nice start! :-)