Skip find_existing_run call if head and tail pairs sorted differently by AngelicosPhosphoros · Pull Request #143495 · rust-lang/rust

@AngelicosPhosphoros thanks for tagging me, here is a review of the idea rather than the code itself, which is fine other than maybe the h0 etc. variable names.

I can totally understand the motivation for this change. Why waste N-2 comparisons if it can be avoided by doing 2-4 additional comparisons? At first glance it's a nice algorithmic improvement with no downsides, but upon closer inspection it leaves me with mixed feelings. Before going further, let's look at some real performance figures, since the motivation for this change is rooted in improving performance. All tests were performed with rustc 1.90.0-nightly (28f1c8079 2025-06-24) on my main Zen 3 machine. random is the fully random pattern and random_s95 is 95% sorted followed by 5% unsorted, simulating the append + sort workload as described here. random_snl_x is a derivative of random_s where everything but the last x elements is sorted. I've also included slice::sort for reasons that will become apparent shortly.
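Roughly, the patterns can be generated like this (a self-contained sketch, not the actual benchmark harness; the function names and the LCG stand in for the real generators and RNG):

```rust
// Simple LCG standing in for a proper RNG, to keep this sketch self-contained.
fn lcg(state: &mut u64) -> u64 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    *state >> 33
}

// Fully random pattern.
fn random(len: usize, seed: u64) -> Vec<u64> {
    let mut s = seed;
    (0..len).map(|_| lcg(&mut s)).collect()
}

// 95% sorted prefix followed by 5% unsorted tail: simulates sort + append.
fn random_s95(len: usize, seed: u64) -> Vec<u64> {
    let mut v = random(len, seed);
    let split = len * 95 / 100;
    v[..split].sort_unstable();
    v
}

// Everything but the last `x` elements is sorted.
fn random_snl_x(len: usize, x: usize, seed: u64) -> Vec<u64> {
    let mut v = random(len, seed);
    let split = len - x.min(len);
    v[..split].sort_unstable();
    v
}
```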

[benchmark result images]

As we can see there is a small but noticeable improvement for random_snl_1, which simulates the case that exactly one element is appended to an already sorted vector. The effect is of similar strength for both u64 and String; despite String being more expensive to compare in general, other constants seem to outweigh that here. random_snl_2 already runs into a 50% chance that the new heuristic fails to detect that a full scan will be futile, which is enough to nullify the improvement in practice. From this we can hypothesize that the improvement will only be meaningful in the very specific case that exactly one element was added to an already sorted input before slice::sort_unstable is called.
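To make the discussion concrete, my understanding of the proposed heuristic is roughly the following (a sketch of the idea, not the PR's actual code; could_be_single_run is a name I made up):

```rust
// If the first pair and the last pair of the slice are ordered differently,
// the slice cannot consist of a single ascending or descending run, so a
// full scan looking for such a run is guaranteed to fail and can be skipped.
fn could_be_single_run<T: Ord>(v: &[T]) -> bool {
    if v.len() < 2 {
        return true;
    }
    let head_ascending = v[0] <= v[1];
    let tail_ascending = v[v.len() - 2] <= v[v.len() - 1];
    head_ascending == tail_ascending
}
```

This also shows where random_snl_2 hurts: with two random elements at the end, the last pair agrees with the head pair's direction roughly half the time, and in those cases the futile full scan still happens.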

Zooming out, there is a larger issue. Essentially this is trying to optimize a known and documented performance sub-optimality in a way that only works for a very narrow use-case. The documentation for slice::sort_unstable currently contains the following:

It is typically faster than stable sorting, except in a few special cases, e.g., when the slice is partially sorted.

If users can predict this use-case they are much better served with slice::sort which gracefully and efficiently handles any kind of pre-sorted sub-segments as seen in the benchmark results.
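For example, a user who knows their workload is sorted + append is better off writing something like this minimal illustration (append_then_sort is just a name for this sketch):

```rust
// Workload: an already sorted vec plus a handful of appended elements.
fn append_then_sort(mut v: Vec<u64>, extra: &[u64]) -> Vec<u64> {
    v.extend_from_slice(extra); // appended tail breaks sortedness at the end
    // slice::sort handles the long pre-sorted run gracefully, whereas
    // slice::sort_unstable currently pays close to full cost for this shape.
    v.sort();
    v
}
```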

With all this combined I'm not convinced that this change - which represents a small but non-zero increase in code complexity - should be merged. It's a non-ideal situation that the generally faster slice::sort_unstable loses out to slice::sort in the quite common sort + append workload, especially for users with code structured in a way that makes it hard to prefer one over the other. There are more robust approaches that could potentially improve this situation, namely bidirectional initial scanning or, even better, some form of in-place rotation-based merging. @orlp and I initially decided against pursuing these ideas to keep binary size and compile times in check, but it certainly doesn't seem impossible to achieve even with a tight budget.