I'd like to move forward with this. Since there are no objections to the approach suggested by James, I've tentatively sent the corresponding v5 implementation (https://reviews.llvm.org/D56593) for review.
On Fri, Jan 11, 2019 at 1:45 PM Clement Courbet <courbet@google.com> wrote:
On Thu, Jan 10, 2019 at 4:47 PM Clement Courbet <courbet@google.com> wrote:
> On Wed, Jan 9, 2019 at 6:16 PM James Y Knight <jyknight@google.com> wrote:
>> On Tue, Jan 8, 2019 at 9:24 AM Clement Courbet <courbet@google.com> wrote:
>>> I'm afraid about the "almost" and "generally": what about users who don't?
>>
>> Even so, it should be fine to enable it for those platforms which do include it. I do note, sadly, that currently out of all these implementations, only NetBSD and FreeBSD seem to actually define a separate, more optimized bcmp function. That does mean that this optimization would be effectively a no-op for the vast majority of people. This might or might not be considered a real issue.
>
> Right, the issue is adding an effectively useless optimization in llvm.
> - In my original proposal, people have to explicitly opt in to the feature and link to their memcmp implementation; they do not get the improvement automatically.
> - In this proposal, they have to patch their libc, which might be slightly more painful depending on the system. Users may also include a function named bcmp in their binary, which will override the one from libc.
> Here's a patch with this proposal to see what this looks like: https://reviews.llvm.org/D56436

> It feels like this optimization would be better done in llvm/lib/Transforms/Utils/SimplifyLibCalls.cpp,

I'll have a look at this approach. You're right, that's a better place to do this indeed: https://reviews.llvm.org/D56593

> But if you can show a similar performance win in public code, it'd be great to attempt to push a more optimized version upstream, at least to glibc. Some more precise numbers than "very large improvement" are probably necessary to show it's actually worth it. :)

We were planning to contribute it to compiler-rt. Contributing a deprecated function to the libc sounds... weird.

> Yes, contributing an optimization for a deprecated function is indeed weird.
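To make the pattern under discussion concrete (this sketch is mine, not from the thread): the optimization targets call sites where the result of memcmp is only tested against zero, so the ordering memcmp computes is wasted work and the call can be lowered to bcmp, whose return value only distinguishes equal from not-equal.

```cpp
#include <cstring>
#include <strings.h>  // bcmp (POSIX; legacy, but still provided by most libcs)

// A call site like this only needs equality...
bool names_equal(const char* a, const char* b, std::size_t n) {
  return std::memcmp(a, b, n) == 0;  // ordering computed, then discarded
}

// ...so the compiler may lower it to this, giving the implementation the
// freedom to bail out as soon as any byte is known to differ, in any order:
bool names_equal_lowered(const char* a, const char* b, std::size_t n) {
  return ::bcmp(a, b, n) == 0;  // only zero vs. nonzero is specified
}
```

Both functions are observably equivalent at such call sites, which is what makes the rewrite legal.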
> Thus the importance of reliable performance numbers justifying the change -- I'd never have thought that the performance cost of returning an ordering from memcmp would be important, and I suspect nobody else did.

Fair enough, let me give some numbers for this change.

First, a caveat with any benchmark that compares strings: the results depend a lot on the distribution of the sizes and contents of those strings. So it makes more sense to benchmark an actual application, and we have our own custom benchmarks for that. That being said, one of the cases where we have found this optimization to be impactful is `operator==(const string&, const string&)`. libcxx has a family of benchmarks for this (`BM_StringRelational_Eq`), which I'm going to use here.

`BM_StringRelational_Eq` benchmarks the comparison of strings of 7 (Small), 63 (Large) and 5k (Huge) characters, in four scenarios (scenarii?):
- The equal case (Control), which is theoretically the worst case, as you have to prove that all bytes are equal.
- The cases where the strings differ. There, bcmp() only needs to prove that one byte differs to return nonzero. Strings typically differ at the start (ChangeFirst), but also, interestingly, at the end (ChangeLast: this happens when comparing strings with a common prefix, which occurs frequently, e.g. when comparing subsequent elements of a sorted list of strings).
Another interesting case is when the change position is in the middle (ChangeMiddle).

For this comparison, I'm using a call to `memcmp` as the base, and the following crude bcmp() implementation as the experiment (I'm assuming X86_64, where unaligned 64-bit loads are cheap). It shows how we can take advantage of the observations above to optimize the typical cases:

```
#include <cstdint>
#include <cstring>

#define UNALIGNED_LOAD64(_p) (*reinterpret_cast<const uint64_t*>(_p))

extern "C" int bcmp(const void* p1, const void* p2, size_t n) throw() {
  const char* a = reinterpret_cast<const char*>(p1);
  const char* b = reinterpret_cast<const char*>(p2);
  if (n >= 8) {
    // Any difference in the first or last 8 bytes proves inequality.
    uint64_t u = UNALIGNED_LOAD64(a) ^ UNALIGNED_LOAD64(b);
    uint64_t v = UNALIGNED_LOAD64(a + n - 8) ^ UNALIGNED_LOAD64(b + n - 8);
    if ((u | v) != 0) {
      return 1;
    }
  }
  return memcmp(a, b, n);
}
```

Note that:
- There is a bit of noise in the results, but you'll see that this quite dumb bcmp() reasonably improves {Large,Huge}_{ChangeFirst,ChangeLast} (note that the improvement to {Large,Huge}_ChangeLast cannot be achieved with the semantics of memcmp) without hurting the `ChangeMiddle` and `Control` cases.
- The small string case (size==7) is not affected by our change because operator== has a fast path for very small sizes.
- We are still experimenting with the final bcmp() implementation (in particular, improving the `Control` and `ChangeMiddle` cases by extracting more parallelism).
Our current version is better than memcmp() across the board on X86.

                                                  base (ns)  exp (ns)
BM_StringRelational_Eq_Empty_Empty_Control          1.65       1.7
BM_StringRelational_Eq_Empty_Small_Control          1.37       1.4
BM_StringRelational_Eq_Empty_Large_Control          1.37       1.44
BM_StringRelational_Eq_Empty_Huge_Control           1.38       1.44
BM_StringRelational_Eq_Small_Small_Control          6.53       6.51
BM_StringRelational_Eq_Small_Small_ChangeFirst      1.96       1.94
BM_StringRelational_Eq_Small_Small_ChangeMiddle     5.06       4.95
BM_StringRelational_Eq_Small_Small_ChangeLast       6.77       6.84
BM_StringRelational_Eq_Small_Large_Control          1.38       1.41
BM_StringRelational_Eq_Small_Huge_Control           1.37       1.39
BM_StringRelational_Eq_Large_Large_Control          5.54       5.8
BM_StringRelational_Eq_Large_Large_ChangeFirst      6.25       3.06
BM_StringRelational_Eq_Large_Large_ChangeMiddle     5.5        5.94
BM_StringRelational_Eq_Large_Large_ChangeLast       6.04       3.42
BM_StringRelational_Eq_Large_Huge_Control           1.1        1.1
BM_StringRelational_Eq_Huge_Huge_Control            118        118
BM_StringRelational_Eq_Huge_Huge_ChangeFirst        5.65       3.02
BM_StringRelational_Eq_Huge_Huge_ChangeMiddle       69.5       66.9
BM_StringRelational_Eq_Huge_Huge_ChangeLast         118        3.43