LLVM: lib/Transforms/Scalar/LoopStrengthReduce.cpp File Reference
| Macros | |
|---|---|
| #define | DEBUG_TYPE "loop-reduce" |
| Functions | |
|---|---|
| static void | DoInitialMatch (const SCEV *S, Loop *L, SmallVectorImpl< const SCEV * > &Good, SmallVectorImpl< const SCEV * > &Bad, ScalarEvolution &SE) |
| Recursion helper for initialMatch. | |
| static bool | containsAddRecDependentOnLoop (const SCEV *S, const Loop &L) |
| static bool | isAddRecSExtable (const SCEVAddRecExpr *AR, ScalarEvolution &SE) |
| Return true if the given addrec can be sign-extended without changing its value. | |
| static bool | isAddSExtable (const SCEVAddExpr *A, ScalarEvolution &SE) |
| Return true if the given add can be sign-extended without changing its value. | |
| static bool | isMulSExtable (const SCEVMulExpr *M, ScalarEvolution &SE) |
| Return true if the given mul can be sign-extended without changing its value. | |
| static const SCEV * | getExactSDiv (const SCEV *LHS, const SCEV *RHS, ScalarEvolution &SE, bool IgnoreSignificantBits=false) |
| Return an expression for LHS /s RHS, if it can be determined and if the remainder is known to be zero, or null otherwise. | |
| static Immediate | ExtractImmediate (const SCEV *&S, ScalarEvolution &SE) |
| If S involves the addition of a constant integer value, return that integer value, and mutate S to point to a new SCEV with that value excluded. | |
| static GlobalValue * | ExtractSymbol (const SCEV *&S, ScalarEvolution &SE) |
| If S involves the addition of a GlobalValue address, return that symbol, and mutate S to point to a new SCEV with that value excluded. | |
| static bool | isAddressUse (const TargetTransformInfo &TTI, Instruction *Inst, Value *OperandVal) |
| Returns true if the specified instruction is using the specified value as an address. | |
| static MemAccessTy | getAccessType (const TargetTransformInfo &TTI, Instruction *Inst, Value *OperandVal) |
| Return the type of the memory being accessed. | |
| static bool | isExistingPhi (const SCEVAddRecExpr *AR, ScalarEvolution &SE) |
| Return true if this AddRec is already a phi in its loop. | |
| static bool | isHighCostExpansion (const SCEV *S, SmallPtrSetImpl< const SCEV * > &Processed, ScalarEvolution &SE) |
| Check if expanding this expression is likely to incur significant cost. | |
| static bool | isAMCompletelyFolded (const TargetTransformInfo &TTI, const LSRUse &LU, const Formula &F) |
| Check if the addressing mode defined by F is completely folded in LU at isel time. | |
| static InstructionCost | getScalingFactorCost (const TargetTransformInfo &TTI, const LSRUse &LU, const Formula &F, const Loop &L) |
| static bool | isAMCompletelyFolded (const TargetTransformInfo &TTI, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue *BaseGV, Immediate BaseOffset, bool HasBaseReg, int64_t Scale, Instruction *Fixup=nullptr) |
| static unsigned | getSetupCost (const SCEV *Reg, unsigned Depth) |
| static bool | isAMCompletelyFolded (const TargetTransformInfo &TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue *BaseGV, Immediate BaseOffset, bool HasBaseReg, int64_t Scale) |
| static bool | isAMCompletelyFolded (const TargetTransformInfo &TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, const Formula &F, const Loop &L) |
| static bool | isLegalUse (const TargetTransformInfo &TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue *BaseGV, Immediate BaseOffset, bool HasBaseReg, int64_t Scale) |
| Test whether we know how to expand the current formula. | |
| static bool | isLegalUse (const TargetTransformInfo &TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, const Formula &F) |
| static bool | isLegalAddImmediate (const TargetTransformInfo &TTI, Immediate Offset) |
| static bool | isAlwaysFoldable (const TargetTransformInfo &TTI, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue *BaseGV, Immediate BaseOffset, bool HasBaseReg) |
| static bool | isAlwaysFoldable (const TargetTransformInfo &TTI, ScalarEvolution &SE, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, const SCEV *S, bool HasBaseReg) |
| static User::op_iterator | findIVOperand (User::op_iterator OI, User::op_iterator OE, Loop *L, ScalarEvolution &SE) |
| Helper for CollectChains that finds an IV operand (computed by an AddRec in this loop) within [OI,OE) or returns OE. | |
| static Value * | getWideOperand (Value *Oper) |
| IVChain logic must consistently peek base TruncInst operands, so wrap it in a convenient helper. | |
| static const SCEV * | getExprBase (const SCEV *S) |
| Return an approximation of this SCEV expression's "base", or NULL for any constant. | |
| static bool | isProfitableChain (IVChain &Chain, SmallPtrSetImpl< Instruction * > &Users, ScalarEvolution &SE, const TargetTransformInfo &TTI) |
| Return true if the number of registers needed for the chain is estimated to be less than the number required for the individual IV users. | |
| static bool | canFoldIVIncExpr (const SCEV *IncExpr, Instruction *UserInst, Value *Operand, const TargetTransformInfo &TTI) |
| Return true if the IVInc can be folded into an addressing mode. | |
| static const SCEV * | CollectSubexprs (const SCEV *S, const SCEVConstant *C, SmallVectorImpl< const SCEV * > &Ops, const Loop *L, ScalarEvolution &SE, unsigned Depth=0) |
| Split S into subexpressions which can be pulled out into separate registers. | |
| static bool | mayUsePostIncMode (const TargetTransformInfo &TTI, LSRUse &LU, const SCEV *S, const Loop *L, ScalarEvolution &SE) |
| Return true if the SCEV represents a value that may end up as a post-increment operation. | |
| static const SCEV * | getAnyExtendConsideringPostIncUses (ArrayRef< PostIncLoopSet > Loops, const SCEV *Expr, Type *ToTy, ScalarEvolution &SE) |
| Extend/Truncate Expr to ToTy considering post-inc uses in Loops. | |
| static bool | IsSimplerBaseSCEVForTarget (const TargetTransformInfo &TTI, ScalarEvolution &SE, const SCEV *Best, const SCEV *Reg, MemAccessTy AccessType) |
| static Instruction * | getFixupInsertPos (const TargetTransformInfo &TTI, const LSRFixup &Fixup, const LSRUse &LU, Instruction *IVIncInsertPos, DominatorTree &DT) |
| static unsigned | numLLVMArgOps (SmallVectorImpl< uint64_t > &Expr) |
| Returns the total number of DW_OP_llvm_arg operands in the expression. | |
| template<typename T> | |
| static void | updateDVIWithLocation (T &DbgVal, Value *Location, SmallVectorImpl< uint64_t > &Ops) |
| Overwrites DVI with the location and Ops as the DIExpression. | |
| template<typename T> | |
| static void | updateDVIWithLocations (T &DbgVal, SmallVectorImpl< Value * > &Locations, SmallVectorImpl< uint64_t > &Ops) |
| Overwrite DVI with locations placed into a DIArglist. | |
| static void | UpdateDbgValue (DVIRecoveryRec &DVIRec, SmallVectorImpl< Value * > &NewLocationOps, SmallVectorImpl< uint64_t > &NewExpr) |
| Write the new expression and new location ops for the dbg.value. | |
| static Value * | getValueOrPoison (WeakVH &VH, LLVMContext &C) |
| Cached location ops may be erased during LSR, in which case a poison value is required when restoring from the cache. | |
| static void | restorePreTransformState (DVIRecoveryRec &DVIRec) |
| Restore the DVI's pre-LSR arguments. Substitute undef for any erased values. | |
| static bool | SalvageDVI (llvm::Loop *L, ScalarEvolution &SE, llvm::PHINode *LSRInductionVar, DVIRecoveryRec &DVIRec, const SCEV *SCEVInductionVar, SCEVDbgValueBuilder IterCountExpr) |
| static void | DbgRewriteSalvageableDVIs (llvm::Loop *L, ScalarEvolution &SE, llvm::PHINode *LSRInductionVar, SmallVector< std::unique_ptr< DVIRecoveryRec >, 2 > &DVIToUpdate) |
| Obtain an expression for the iteration count, then attempt to salvage the dbg.value intrinsics. | |
| static void | DbgGatherSalvagableDVI (Loop *L, ScalarEvolution &SE, SmallVector< std::unique_ptr< DVIRecoveryRec >, 2 > &SalvageableDVISCEVs) |
| Identify and cache salvageable DVI locations and expressions along with the corresponding SCEV(s). | |
| static llvm::PHINode * | GetInductionVariable (const Loop &L, ScalarEvolution &SE, const LSRInstance &LSR) |
| Ideally pick the PHI IV inserted by ScalarEvolutionExpander. | |
| static bool | ReduceLoopStrength (Loop *L, IVUsers &IU, ScalarEvolution &SE, DominatorTree &DT, LoopInfo &LI, const TargetTransformInfo &TTI, AssumptionCache &AC, TargetLibraryInfo &TLI, MemorySSA *MSSA) |
| INITIALIZE_PASS_BEGIN (LoopStrengthReduce, "loop-reduce", "Loop Strength Reduction", false, false) INITIALIZE_PASS_END(LoopStrengthReduce |
| Variables | |
|---|---|
| static const unsigned | MaxIVUsers = 200 |
| MaxIVUsers is an arbitrary threshold that provides an early opportunity for bail out. | |
| static const unsigned | MaxSCEVSalvageExpressionSize = 64 |
| Limit the size of expression that SCEV-based salvaging will attempt to translate into a DIExpression. | |
| static cl::opt< bool > | EnablePhiElim ("enable-lsr-phielim", cl::Hidden, cl::init(true), cl::desc("Enable LSR phi elimination")) |
| static cl::opt< bool > | InsnsCost ("lsr-insns-cost", cl::Hidden, cl::init(true), cl::desc("Add instruction count to a LSR cost model")) |
| static cl::opt< bool > | LSRExpNarrow ("lsr-exp-narrow", cl::Hidden, cl::init(false), cl::desc("Narrow LSR complex solution using" " expectation of registers number")) |
| static cl::opt< bool > | FilterSameScaledReg ("lsr-filter-same-scaled-reg", cl::Hidden, cl::init(true), cl::desc("Narrow LSR search space by filtering non-optimal formulae" " with the same ScaledReg and Scale")) |
| static cl::opt< TTI::AddressingModeKind > | PreferredAddresingMode ("lsr-preferred-addressing-mode", cl::Hidden, cl::init(TTI::AMK_None), cl::desc("A flag that overrides the target's preferred addressing mode."), cl::values(clEnumValN(TTI::AMK_None, "none", "Don't prefer any addressing mode"), clEnumValN(TTI::AMK_PreIndexed, "preindexed", "Prefer pre-indexed addressing mode"), clEnumValN(TTI::AMK_PostIndexed, "postindexed", "Prefer post-indexed addressing mode"), clEnumValN(TTI::AMK_All, "all", "Consider all addressing modes"))) |
| static cl::opt< unsigned > | ComplexityLimit ("lsr-complexity-limit", cl::Hidden, cl::init(std::numeric_limits< uint16_t >::max()), cl::desc("LSR search space complexity limit")) |
| static cl::opt< unsigned > | SetupCostDepthLimit ("lsr-setupcost-depth-limit", cl::Hidden, cl::init(7), cl::desc("The limit on recursion depth for LSRs setup cost")) |
| static cl::opt< cl::boolOrDefault > | AllowDropSolutionIfLessProfitable ("lsr-drop-solution", cl::Hidden, cl::desc("Attempt to drop solution if it is less profitable")) |
| static cl::opt< bool > | EnableVScaleImmediates ("lsr-enable-vscale-immediates", cl::Hidden, cl::init(true), cl::desc("Enable analysis of vscale-relative immediates in LSR")) |
| static cl::opt< bool > | DropScaledForVScale ("lsr-drop-scaled-reg-for-vscale", cl::Hidden, cl::init(true), cl::desc("Avoid using scaled registers with vscale-relative addressing")) |
| static cl::opt< bool > | StressIVChain ("stress-ivchain", cl::Hidden, cl::init(false), cl::desc("Stress test LSR IV chains")) |
| loop | reduce |
| loop Loop Strength | Reduction |
| loop Loop Strength | false |
◆ DEBUG_TYPE
#define DEBUG_TYPE "loop-reduce"
◆ canFoldIVIncExpr()
Return true if the IVInc can be folded into an addressing mode.
Definition at line 3368 of file LoopStrengthReduce.cpp.
References llvm::CallingConv::C, llvm::dyn_cast(), getAccessType(), llvm::SCEVConstant::getAPInt(), llvm::ConstantInt::getSExtValue(), llvm::APInt::getSignificantBits(), llvm::SCEVConstant::getValue(), isAddressUse(), isAlwaysFoldable(), llvm::SCEVPatternMatch::m_scev_APInt(), llvm::SCEVPatternMatch::m_scev_Mul(), llvm::SCEVPatternMatch::m_SCEVVScale(), and llvm::SCEVPatternMatch::match().
◆ CollectSubexprs()
Split S into subexpressions which can be pulled out into separate registers.
If C is non-null, multiply each subexpression by C.
Return the remainder expression after factoring out the subexpressions captured by Ops. If Ops is complete, return NULL.
Definition at line 3857 of file LoopStrengthReduce.cpp.
References AbstractManglingParser< Derived, Alloc >::Ops, llvm::Add, llvm::CallingConv::C, llvm::cast(), CollectSubexprs(), llvm::Depth, llvm::dyn_cast(), llvm::SCEV::FlagAnyWrap, llvm::ScalarEvolution::getAddRecExpr(), llvm::ScalarEvolution::getConstant(), llvm::ScalarEvolution::getMulExpr(), llvm::SCEV::getType(), llvm::isa(), llvm::SCEVPatternMatch::m_SCEV(), llvm::SCEVPatternMatch::m_scev_AffineAddRec(), llvm::SCEVPatternMatch::m_scev_Mul(), llvm::SCEVPatternMatch::m_SCEVConstant(), and llvm::SCEVPatternMatch::match().
Referenced by CollectSubexprs().
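For illustration, a reduced sketch of this splitting is shown below. It covers only the plain add case; the real CollectSubexprs also walks affine AddRecs and bounds its recursion depth, and `collectSubexprsSketch` is a hypothetical name, not part of the pass.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

// Sketch only: split an add into per-operand candidate registers, scaling
// each by C when a factor is requested. Anything else is returned unsplit as
// the remainder expression.
static const SCEV *collectSubexprsSketch(const SCEV *S, const SCEVConstant *C,
                                         SmallVectorImpl<const SCEV *> &Ops,
                                         ScalarEvolution &SE) {
  if (const auto *Add = dyn_cast<SCEVAddExpr>(S)) {
    for (const SCEV *Op : Add->operands())
      Ops.push_back(C ? SE.getMulExpr(C, Op) : Op);
    return nullptr; // fully split, no remainder
  }
  return C ? SE.getMulExpr(C, S) : S; // opaque expression: leave as remainder
}
```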
◆ containsAddRecDependentOnLoop()
◆ DbgGatherSalvagableDVI()
◆ DbgRewriteSalvageableDVIs()
◆ DoInitialMatch()
Recursion helper for initialMatch.
Definition at line 540 of file LoopStrengthReduce.cpp.
References AbstractManglingParser< Derived, Alloc >::Ops, llvm::Add, DoInitialMatch(), llvm::drop_begin(), llvm::dyn_cast(), llvm::SCEV::FlagAnyWrap, llvm::ScalarEvolution::getAddRecExpr(), llvm::Constant::getAllOnesValue(), llvm::ScalarEvolution::getConstant(), llvm::ScalarEvolution::getEffectiveSCEVType(), llvm::ScalarEvolution::getMulExpr(), llvm::ScalarEvolution::getSCEV(), llvm::SCEV::getType(), llvm::SCEVPatternMatch::m_Loop(), llvm::SCEVPatternMatch::m_SCEV(), llvm::SCEVPatternMatch::m_scev_AffineAddRec(), llvm::SCEVPatternMatch::match(), Mul, llvm::ScalarEvolution::properlyDominates(), and llvm::SmallVectorTemplateBase< T, bool >::push_back().
Referenced by DoInitialMatch().
◆ ExtractImmediate()
If S involves the addition of a constant integer value, return that integer value, and mutate S to point to a new SCEV with that value excluded.
Definition at line 935 of file LoopStrengthReduce.cpp.
References llvm::Add, llvm::CallingConv::C, llvm::dyn_cast(), EnableVScaleImmediates, ExtractImmediate(), llvm::SCEV::FlagAnyWrap, llvm::SmallVectorTemplateCommon< T, typename >::front(), llvm::ScalarEvolution::getAddExpr(), llvm::ScalarEvolution::getAddRecExpr(), llvm::ScalarEvolution::getConstant(), llvm::SCEV::getType(), llvm::SCEVPatternMatch::m_scev_APInt(), llvm::SCEVPatternMatch::m_scev_Mul(), llvm::SCEVPatternMatch::m_SCEVVScale(), and llvm::SCEVPatternMatch::match().
Referenced by ExtractImmediate(), and isAlwaysFoldable().
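As a rough illustration of the mutation described above, the hedged sketch below handles only the plain constant and (C + X) cases and returns a plain int64_t rather than the pass's Immediate type; `extractImmediateSketch` is a made-up name. The real function also recurses into AddRec starts and understands vscale-relative immediates.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

static int64_t extractImmediateSketch(const SCEV *&S, ScalarEvolution &SE) {
  if (const auto *C = dyn_cast<SCEVConstant>(S)) {
    S = SE.getConstant(C->getType(), 0); // nothing remains once C is removed
    return C->getAPInt().getSExtValue();
  }
  if (const auto *Add = dyn_cast<SCEVAddExpr>(S)) {
    if (const auto *C = dyn_cast<SCEVConstant>(Add->getOperand(0))) {
      // Rebuild the add without its leading constant and hand that back via S.
      SmallVector<const SCEV *, 4> Ops(drop_begin(Add->operands()));
      S = SE.getAddExpr(Ops);
      return C->getAPInt().getSExtValue();
    }
  }
  return 0; // no constant addend found; S is left untouched
}
```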
◆ ExtractSymbol()
If S involves the addition of a GlobalValue address, return that symbol, and mutate S to point to a new SCEV with that value excluded.
Definition at line 966 of file LoopStrengthReduce.cpp.
References llvm::Add, llvm::SmallVectorTemplateCommon< T, typename >::back(), llvm::dyn_cast(), ExtractSymbol(), llvm::SCEV::FlagAnyWrap, llvm::SmallVectorTemplateCommon< T, typename >::front(), llvm::ScalarEvolution::getAddExpr(), llvm::ScalarEvolution::getAddRecExpr(), and llvm::ScalarEvolution::getConstant().
Referenced by ExtractSymbol(), and isAlwaysFoldable().
◆ findIVOperand()
◆ getAccessType()
◆ getAnyExtendConsideringPostIncUses()
◆ getExactSDiv()
Return an expression for LHS /s RHS, if it can be determined and if the remainder is known to be zero, or null otherwise.
If IgnoreSignificantBits is true, expressions like (X * Y) /s Y are simplified to X, ignoring that the multiplication may overflow, which is useful when the result will be used in a context where the most significant bits are ignored.
Definition at line 830 of file LoopStrengthReduce.cpp.
References AbstractManglingParser< Derived, Alloc >::Ops, llvm::Add, llvm::CallingConv::C, llvm::drop_begin(), llvm::dyn_cast(), llvm::SCEV::FlagAnyWrap, llvm::ScalarEvolution::getAddExpr(), llvm::ScalarEvolution::getAddRecExpr(), llvm::SCEVConstant::getAPInt(), llvm::ScalarEvolution::getConstant(), getExactSDiv(), llvm::ScalarEvolution::getMulExpr(), isAddRecSExtable(), isAddSExtable(), isMulSExtable(), LHS, Mul, RA, RHS, llvm::APInt::sdiv(), and llvm::APInt::srem().
Referenced by getExactSDiv().
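A minimal sketch of the constant-over-constant case is shown below, assuming the usual SCEV headers; `exactSDivConstants` is an illustrative name, and the real getExactSDiv additionally distributes over adds, muls, and addrecs and honors IgnoreSignificantBits.

```cpp
#include "llvm/Analysis/ScalarEvolution.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

// Sketch only: divide two constant SCEVs when the signed remainder is
// provably zero; return null otherwise so callers can discard the formula.
static const SCEV *exactSDivConstants(const SCEV *LHS, const SCEV *RHS,
                                      ScalarEvolution &SE) {
  const auto *LC = dyn_cast<SCEVConstant>(LHS);
  const auto *RC = dyn_cast<SCEVConstant>(RHS);
  if (!LC || !RC || RC->getAPInt().isZero())
    return nullptr;
  const APInt &LA = LC->getAPInt();
  const APInt &RA = RC->getAPInt();
  if (!LA.srem(RA).isZero()) // remainder must be known to be zero
    return nullptr;
  return SE.getConstant(LA.sdiv(RA));
}
```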
◆ getExprBase()
Return an approximation of this SCEV expression's "base", or NULL for any constant.
Returning the expression itself is conservative. Returning a deeper subexpression is more precise and valid as long as it isn't less complex than another subexpression. For expressions involving multiple unscaled values, we need to return the pointer-type SCEVUnknown. This avoids forming chains across objects, such as: PrevOper==a[i], IVOper==b[i], IVInc==b-a.
Since SCEVUnknown is the rightmost type, and pointers are the rightmost SCEVUnknown, we simply return the rightmost SCEV operand.
Definition at line 3009 of file LoopStrengthReduce.cpp.
References llvm::Add, llvm::cast(), getExprBase(), llvm::SCEV::getSCEVType(), llvm_unreachable, llvm::reverse(), llvm::scAddExpr, llvm::scAddRecExpr, llvm::scConstant, llvm::scMulExpr, llvm::scSignExtend, llvm::scTruncate, llvm::scVScale, and llvm::scZeroExtend.
Referenced by getExprBase().
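The "rightmost operand" rule can be pictured with the reduced sketch below; it handles add expressions only, whereas the real function also walks truncates, extensions, and AddRecs and scans right-to-left for the first usable base. `exprBaseSketch` is a hypothetical helper.

```cpp
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

// Sketch only: constants have no interesting base; for an add, SCEV's operand
// ordering puts SCEVUnknowns (and hence pointers) last, so the rightmost
// operand approximates the base object.
static const SCEV *exprBaseSketch(const SCEV *S) {
  switch (S->getSCEVType()) {
  case scConstant:
  case scVScale:
    return nullptr;
  case scAddExpr: {
    const auto *Add = cast<SCEVAddExpr>(S);
    return exprBaseSketch(Add->getOperand(Add->getNumOperands() - 1));
  }
  default:
    return S; // conservatively treat the expression itself as the base
  }
}
```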
◆ getFixupInsertPos()
◆ GetInductionVariable()
◆ getScalingFactorCost()
◆ getSetupCost()
◆ getValueOrPoison()
◆ getWideOperand()
◆ INITIALIZE_PASS_BEGIN()
INITIALIZE_PASS_BEGIN(LoopStrengthReduce, "loop-reduce", "Loop Strength Reduction", false, false)
◆ isAddRecSExtable()
◆ isAddressUse()
◆ isAddSExtable()
◆ isAlwaysFoldable() [1/2]
◆ isAlwaysFoldable() [2/2]
◆ isAMCompletelyFolded() [1/4]
Check if the addressing mode defined by F is completely folded in LU at isel time.
This includes address-mode folding and special icmp tricks. This function returns true if LU can accommodate what F defines and up to 1 base + 1 scaled + offset. In other words, if F has several base registers, this function may still return true. Therefore, users still need to account for additional base registers and/or unfolded offsets to derive an accurate cost model.
Definition at line 1945 of file LoopStrengthReduce.cpp.
References F, Fixup, and isAMCompletelyFolded().
Referenced by getScalingFactorCost(), isAlwaysFoldable(), isAlwaysFoldable(), isAMCompletelyFolded(), isAMCompletelyFolded(), isAMCompletelyFolded(), and isLegalUse().
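These folding queries ultimately bottom out in TargetTransformInfo. The hedged sketch below shows the shape of such a query using a plain Type* and address space rather than the pass's MemAccessTy; `isFoldableAddrMode` is an illustrative name, and it assumes the long-standing int64_t-offset overload of isLegalAddressingMode (newer LLVM versions have extended this signature).

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

// Sketch only: ask the target whether BaseGV + BaseOffset + BaseReg + Scale*Reg
// is a single foldable addressing mode for an access of type AccessTy.
static bool isFoldableAddrMode(const TargetTransformInfo &TTI, Type *AccessTy,
                               GlobalValue *BaseGV, int64_t BaseOffset,
                               bool HasBaseReg, int64_t Scale,
                               unsigned AddrSpace = 0) {
  return TTI.isLegalAddressingMode(AccessTy, BaseGV, BaseOffset, HasBaseReg,
                                   Scale, AddrSpace);
}
```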
◆ isAMCompletelyFolded() [2/4]
◆ isAMCompletelyFolded() [3/4]
| bool isAMCompletelyFolded ( const TargetTransformInfo & TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue * BaseGV, Immediate BaseOffset, bool HasBaseReg, int64_t Scale ) | static |
|---|
◆ isAMCompletelyFolded() [4/4]
◆ isExistingPhi()
◆ isHighCostExpansion()
Check if expanding this expression is likely to incur significant cost.
This is tricky because SCEV doesn't track which expressions are actually computed by the current IR.
We currently allow expansion of IV increments that involve adds, multiplication by constants, and AddRecs from existing phis.
TODO: Allow UDivExpr if we can find an existing IV increment that is an obvious multiple of the UDivExpr.
Definition at line 1110 of file LoopStrengthReduce.cpp.
References llvm::Add, llvm::cast(), llvm::dyn_cast(), llvm::Instruction::getOpcode(), llvm::ScalarEvolution::getSCEV(), llvm::SCEV::getSCEVType(), llvm::Value::getType(), llvm::SmallPtrSetImpl< PtrType >::insert(), llvm::isa(), isExistingPhi(), isHighCostExpansion(), llvm::ScalarEvolution::isSCEVable(), llvm::SCEVPatternMatch::m_SCEV(), llvm::SCEVPatternMatch::m_scev_Mul(), llvm::SCEVPatternMatch::match(), llvm::scConstant, llvm::scSignExtend, llvm::scTruncate, llvm::scUnknown, llvm::scVScale, llvm::scZeroExtend, and llvm::Value::users().
Referenced by isHighCostExpansion().
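A hedged sketch of this type-based filtering is shown below; `looksHighCost` is a made-up helper, and unlike the real function it does not check whether multiplies or AddRecs are already materialized in the IR.

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/Analysis/ScalarEvolutionExpressions.h"
using namespace llvm;

// Sketch only: existing values and constants are cheap, casts and adds are as
// cheap as their operands, and everything else (e.g. udiv) is treated as
// potentially expensive to re-expand.
static bool looksHighCost(const SCEV *S) {
  switch (S->getSCEVType()) {
  case scConstant:
  case scVScale:
  case scUnknown:
    return false;
  case scTruncate:
  case scZeroExtend:
  case scSignExtend:
    return looksHighCost(cast<SCEVCastExpr>(S)->getOperand());
  case scAddExpr:
    return llvm::any_of(cast<SCEVAddExpr>(S)->operands(),
                        [](const SCEV *Op) { return looksHighCost(Op); });
  default:
    return true;
  }
}
```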
◆ isLegalAddImmediate()
◆ isLegalUse() [1/2]
◆ isLegalUse() [2/2]
| bool isLegalUse ( const TargetTransformInfo & TTI, Immediate MinOffset, Immediate MaxOffset, LSRUse::KindType Kind, MemAccessTy AccessTy, GlobalValue * BaseGV, Immediate BaseOffset, bool HasBaseReg, int64_t Scale ) | static |
|---|
◆ isMulSExtable()
◆ isProfitableChain()
Return true if the number of registers needed for the chain is estimated to be less than the number required for the individual IV users.
First prohibit any IV users that keep the IV live across increments (the Users set should be empty). Next count the number and type of increments in the chain.
Chaining IVs can lead to considerable code bloat if ISEL doesn't effectively use postinc addressing modes. Only consider the chain profitable if the increments can be computed in fewer registers when chained.
TODO: Consider IVInc free if it's already used in another chains.
Definition at line 3076 of file LoopStrengthReduce.cpp.
References assert(), llvm::dbgs(), for(), llvm::ScalarEvolution::getSCEV(), llvm::isa(), LLVM_DEBUG, StressIVChain, and Users.
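As a purely illustrative sketch of the register comparison described above (not the pass's actual cost model; both parameters are hypothetical counts):

```cpp
// Toy comparison: unchained, each IV user roughly costs one register for its
// own expanded induction value; chained, only the chain head plus the
// increments that cannot be folded into addressing modes stay live.
static bool chainLooksProfitable(unsigned NumIVUsers,
                                 unsigned NumUnfoldableIncrements) {
  return 1 /*chain head*/ + NumUnfoldableIncrements < NumIVUsers;
}
```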
◆ IsSimplerBaseSCEVForTarget()
◆ mayUsePostIncMode()
◆ numLLVMArgOps()
◆ ReduceLoopStrength()
Definition at line 7038 of file LoopStrengthReduce.cpp.
References Changed, llvm::SmallVectorImpl< T >::clear(), DbgGatherSalvagableDVI(), DbgRewriteSalvageableDVIs(), llvm::dbgs(), DEBUG_TYPE, llvm::DeleteDeadPHIs(), DL, llvm::SmallVectorTemplateCommon< T, typename >::empty(), EnablePhiElim, GetInductionVariable(), IV, LLVM_DEBUG, llvm::RecursivelyDeleteTriviallyDeadInstructionsPermissive(), llvm::rewriteLoopExitValues(), Rewriter, and llvm::UnusedIndVarInLoop.
Referenced by llvm::LoopStrengthReducePass::run().
◆ restorePreTransformState()
| void restorePreTransformState ( DVIRecoveryRec & DVIRec) | static |
|---|
Restore the DVI's pre-LSR arguments. Substitute undef for any erased values.
Definition at line 6776 of file LoopStrengthReduce.cpp.
References assert(), llvm::dbgs(), llvm::DIArgList::get(), llvm::ValueAsMetadata::get(), llvm::DbgRecord::getContext(), getValueOrPoison(), LLVM_DEBUG, llvm::SmallVectorTemplateBase< T, bool >::push_back(), llvm::DbgVariableRecord::setExpression(), and llvm::DbgVariableRecord::setRawLocation().
Referenced by SalvageDVI().
◆ SalvageDVI()
Definition at line 6807 of file LoopStrengthReduce.cpp.
References assert(), llvm::SmallVectorImpl< T >::assign(), B(), llvm::ScalarEvolution::computeConstantDifference(), llvm::ScalarEvolution::containsErasedValue(), llvm::ScalarEvolution::containsUndefs(), llvm::dbgs(), llvm::dwarf::DW_OP_LLVM_arg, llvm::DIExpression::expr_ops(), llvm::DIExpression::getNumElements(), llvm::isa(), llvm::DbgVariableRecord::isKillLocation(), LLVM_DEBUG, llvm::Offset, llvm::SmallVectorTemplateBase< T, bool >::push_back(), restorePreTransformState(), llvm::SmallVectorTemplateCommon< T, typename >::size(), and UpdateDbgValue().
Referenced by DbgRewriteSalvageableDVIs().
◆ UpdateDbgValue()
Write the new expression and new location ops for the dbg.value.
If possible, reduce the size of the dbg.value by omitting the DIArglist. This can be omitted if:
- There is only a single location, referenced by a single DW_OP_LLVM_arg.
- The DW_OP_LLVM_arg is the first operand in the expression.
Definition at line 6736 of file LoopStrengthReduce.cpp.
References llvm::DIExpression::append(), assert(), llvm::drop_begin(), llvm::dwarf::DW_OP_LLVM_arg, llvm::DbgVariableRecord::getExpression(), llvm::DIExpression::isComplex(), numLLVMArgOps(), llvm::DbgVariableRecord::setExpression(), updateDVIWithLocation(), and updateDVIWithLocations().
Referenced by SalvageDVI().
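The two bullet conditions can be expressed as a small hedged sketch; the helper name and flat-vector representation are illustrative, whereas the pass works on DIExpression operands directly.

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/BinaryFormat/Dwarf.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Sketch only: the DIArglist can be dropped when exactly one location is
// referenced by exactly one DW_OP_LLVM_arg and that arg leads the expression.
static bool canDropDIArglist(ArrayRef<Value *> Locations,
                             ArrayRef<uint64_t> ExprOps) {
  auto NumArgOps = llvm::count(ExprOps, uint64_t(dwarf::DW_OP_LLVM_arg));
  return Locations.size() == 1 && NumArgOps == 1 && !ExprOps.empty() &&
         ExprOps.front() == dwarf::DW_OP_LLVM_arg;
}
```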
◆ updateDVIWithLocation()
◆ updateDVIWithLocations()
◆ AllowDropSolutionIfLessProfitable
| cl::opt< cl::boolOrDefault > AllowDropSolutionIfLessProfitable("lsr-drop-solution", cl::Hidden, cl::desc("Attempt to drop solution if it is less profitable")) ( "lsr-drop-solution" , cl::Hidden , cl::desc("Attempt to drop solution if it is less profitable") ) | static |
|---|
◆ ComplexityLimit
| cl::opt< unsigned > ComplexityLimit("lsr-complexity-limit", cl::Hidden, cl::init(std::numeric_limits< uint16_t >::max()), cl::desc("LSR search space complexity limit")) ( "lsr-complexity-limit" , cl::Hidden , cl::init(std::numeric_limits< uint16_t >::max()) , cl::desc("LSR search space complexity limit") ) | static |
|---|
◆ DropScaledForVScale
| cl::opt< bool > DropScaledForVScale("lsr-drop-scaled-reg-for-vscale", cl::Hidden, cl::init(true), cl::desc("Avoid using scaled registers with vscale-relative addressing")) ( "lsr-drop-scaled-reg-for-vscale" , cl::Hidden , cl::init(true) , cl::desc("Avoid using scaled registers with vscale-relative addressing") ) | static |
|---|
◆ EnablePhiElim
| cl::opt< bool > EnablePhiElim("enable-lsr-phielim", cl::Hidden, cl::init(true), cl::desc("Enable LSR phi elimination")) ( "enable-lsr-phielim" , cl::Hidden , cl::init(true) , cl::desc("Enable LSR phi elimination") ) | static |
|---|
◆ EnableVScaleImmediates
| cl::opt< bool > EnableVScaleImmediates("lsr-enable-vscale-immediates", cl::Hidden, cl::init(true), cl::desc("Enable analysis of vscale-relative immediates in LSR")) ( "lsr-enable-vscale-immediates" , cl::Hidden , cl::init(true) , cl::desc("Enable analysis of vscale-relative immediates in LSR") ) | static |
|---|
◆ false
◆ FilterSameScaledReg
| cl::opt< bool > FilterSameScaledReg("lsr-filter-same-scaled-reg", cl::Hidden, cl::init(true), cl::desc("Narrow LSR search space by filtering non-optimal formulae" " with the same ScaledReg and Scale")) ( "lsr-filter-same-scaled-reg" , cl::Hidden , cl::init(true) , cl::desc("Narrow LSR search space by filtering non-optimal formulae" " with the same ScaledReg and Scale") ) | static |
|---|
◆ InsnsCost
| cl::opt< bool > InsnsCost("lsr-insns-cost", cl::Hidden, cl::init(true), cl::desc("Add instruction count to a LSR cost model")) ( "lsr-insns-cost" , cl::Hidden , cl::init(true) , cl::desc("Add instruction count to a LSR cost model") ) | static |
|---|
◆ LSRExpNarrow
| cl::opt< bool > LSRExpNarrow("lsr-exp-narrow", cl::Hidden, cl::init(false), cl::desc("Narrow LSR complex solution using" " expectation of registers number")) ( "lsr-exp-narrow" , cl::Hidden , cl::init(false) , cl::desc("Narrow LSR complex solution using" " expectation of registers number") ) | static |
|---|
◆ MaxIVUsers
MaxIVUsers is an arbitrary threshold that provides an early opportunity for bail out.
This threshold is far beyond the number of users that LSR can conceivably solve, so it should not affect generated code, but catches the worst cases before LSR burns too much compile time and stack space.
Definition at line 138 of file LoopStrengthReduce.cpp.
◆ MaxSCEVSalvageExpressionSize
Limit the size of expression that SCEV-based salvaging will attempt to translate into a DIExpression.
Choose a maximum size such that debuginfo is not excessively increased and the salvaging is not too expensive for the compiler.
Definition at line 144 of file LoopStrengthReduce.cpp.
Referenced by DbgRewriteSalvageableDVIs().
◆ PreferredAddresingMode
| cl::opt< TTI::AddressingModeKind > PreferredAddresingMode("lsr-preferred-addressing-mode", cl::Hidden, cl::init(TTI::AMK_None), cl::desc("A flag that overrides the target's preferred addressing mode."), cl::values( clEnumValN(TTI::AMK_None, "none", "Don't prefer any addressing mode"), clEnumValN(TTI::AMK_PreIndexed, "preindexed", "Prefer pre-indexed addressing mode"), clEnumValN(TTI::AMK_PostIndexed, "postindexed", "Prefer post-indexed addressing mode"), clEnumValN(TTI::AMK_All, "all", "Consider all addressing modes"))) ( "lsr-preferred-addressing-mode" , cl::Hidden , cl::init(TTI::AMK_None) , cl::desc("A flag that overrides the target's preferred addressing mode.") , cl::values( clEnumValN(TTI::AMK_None, "none", "Don't prefer any addressing mode"), clEnumValN(TTI::AMK_PreIndexed, "preindexed", "Prefer pre-indexed addressing mode"), clEnumValN(TTI::AMK_PostIndexed, "postindexed", "Prefer post-indexed addressing mode"), clEnumValN(TTI::AMK_All, "all", "Consider all addressing modes")) ) | static |
|---|
◆ reduce
◆ Reduction
loop Loop Strength Reduction
◆ SetupCostDepthLimit
| cl::opt< unsigned > SetupCostDepthLimit("lsr-setupcost-depth-limit", cl::Hidden, cl::init(7), cl::desc("The limit on recursion depth for LSRs setup cost")) ( "lsr-setupcost-depth-limit" , cl::Hidden , cl::init(7) , cl::desc("The limit on recursion depth for LSRs setup cost") ) | static |
|---|
◆ StressIVChain
| cl::opt< bool > StressIVChain("stress-ivchain", cl::Hidden, cl::init(false), cl::desc("Stress test LSR IV chains")) ( "stress-ivchain" , cl::Hidden , cl::init(false) , cl::desc("Stress test LSR IV chains") ) | static |
|---|