


On 07/22/2015 01:28 PM, Sean Silva wrote:


On Wed, Jul 22, 2015 at 12:54 PM, Hal Finkel <hfinkel@anl.gov> wrote:
One thing that is important to consider is where in the pipeline these kinds of optimizations fit. We normally try to put the IR into a canonical simplified form in the mid-level optimizer. This form is supposed to be whatever is most useful for exposing other optimizations, and for lowering, but only in a generic sense. We do have some optimizations near the end of our pipeline (vectorization, partial unrolling, etc.) that consider target-specific properties, but only because the alternative is doing those loop optimizations after instruction selection.

ILP and other pipeline-level costs are things we generally consider only in the SelectionDAG and after. If these are IR optimizations, then I'm not sure that considering ILP, etc. is the right metric -- so long as the transformations are sufficiently reversible to allow efficient lowering afterward.

Agreed. It might just be that these initial results are from the "burn-in" specifically targeting short simple sequences, but most of the transformations in the link seem to be things that, if applicable, we would want to do in the backend instead of in the middle-end.
Looking through the items, I see a number that are suitable for mid-level canonicalization. For example, the two for converting and/cmp sequences into truncs seem like good candidates. We need to apply judgment here, but not *all* of these are backend-specific.
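As a concrete illustration (a hand-written sketch, not one of the actual entries from the results page): an and/icmp pair that tests only the low bit can be canonicalized to a trunc to i1 in LLVM IR, roughly:

  define i1 @before(i32 %x) {
    %lowbit = and i32 %x, 1             ; keep only bit 0
    %cmp = icmp ne i32 %lowbit, 0       ; true iff bit 0 is set
    ret i1 %cmp
  }

  define i1 @after(i32 %x) {
    %cmp = trunc i32 %x to i1           ; trunc to i1 keeps exactly bit 0
    ret i1 %cmp
  }

The trunc form is one instruction shorter and makes the bit test explicit.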




-- Sean Silva

-Hal

----- Original Message -----
> From: "Sean Silva" <chisophugis@gmail.com>
> To: "John Regehr" <regehr@cs.utah.edu>
> Cc: llvmdev@cs.uiuc.edu
> Sent: Wednesday, July 22, 2015 2:35:51 PM
> Subject: Re: [LLVMdev] some superoptimizer results
>
>
>
> Are you taking into account critical path length? Because e.g. for:
>
>
>
> %0:i64 = var
> %1:i1 = slt 18446744073709551615:i64, %0
> %2:i64 = subnsw 0:i64, %0
> %3:i64 = select %1, %0, %2
> infer %3
>
> %4:i64 = ashr %0, 63:i64
> %5:i64 = add %0, %4
> %6:i64 = xor %5, %4
> result %6
>
>
> In the former case, the cmp and sub are independent, so they can be
> executed in parallel, while in the latter case all 3 instructions
> are dependent. So the former case can execute in 2 cycles while the
> latter takes 3. Modern OoO chips do in fact exploit this kind of
> thing.
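For reference, the same pair rendered as plain LLVM IR, with the dependence chains noted in comments (a rough hand transcription; the cycle counts assume idealized single-cycle latencies and enough issue width):

  ; Select form: the icmp and the sub each depend only on %x, so an OoO
  ; core can issue them in the same cycle; only the select waits on both.
  ; Critical path: {icmp, sub} -> select  (~2 cycles, idealized).
  define i64 @abs_select(i64 %x) {
    %nonneg = icmp sgt i64 %x, -1
    %neg = sub nsw i64 0, %x
    %abs = select i1 %nonneg, i64 %x, i64 %neg
    ret i64 %abs
  }

  ; Shift/add/xor form: each instruction consumes the previous result, so
  ; all three sit on the critical path.
  ; Critical path: ashr -> add -> xor  (~3 cycles, idealized).
  define i64 @abs_bittrick(i64 %x) {
    %sign = ashr i64 %x, 63
    %biased = add i64 %x, %sign
    %abs = xor i64 %biased, %sign
    ret i64 %abs
  }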
>
>
> -- Sean Silva
>
>
>
>
> On Wed, Jul 22, 2015 at 10:15 AM, John Regehr <regehr@cs.utah.edu>
> wrote:
>
>
> We (the folks working on Souper) would appreciate any feedback on
> these IR-level superoptimizer results:
>
> http://blog.regehr.org/extra_files/souper-jul-15.html
>
> My impression is that while there's clearly plenty of material in
> here that doesn't want to get implemented in an opt pass, there are
> a number of gems hiding in there that are worth implementing.
>
> Blog post containing additional explanation and caveats is here:
>
> http://blog.regehr.org/archives/1252
>
> Thanks!
>
> John
>
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev@cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
>
--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory



_______________________________________________
LLVM Developers mailing list  
LLVMdev@cs.uiuc.edu http://llvm.cs.uiuc.edu  
http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev