LLVM: lib/Target/AMDGPU/AMDGPUUniformIntrinsicCombine.cpp File Reference

This pass simplifies certain intrinsic calls when the arguments are uniform. More...

Go to the source code of this file.

Macros
#define DEBUG_TYPE "amdgpu-uniform-intrinsic-combine"
Functions
static bool isDivergentUseWithNew (const Use &U, const UniformityInfo &UI, const ValueMap< const Value *, bool > &Tracker)
Wrapper for querying uniformity info that first checks locally tracked instructions.
static bool optimizeUniformIntrinsic (IntrinsicInst &II, const UniformityInfo &UI, ValueMap< const Value *, bool > &Tracker)
Optimizes uniform intrinsic calls if their operands can be proven uniform.
static bool runUniformIntrinsicCombine (Function &F, const UniformityInfo &UI)
Iterates over the intrinsic calls in the Function and attempts to optimize them.
INITIALIZE_PASS_BEGIN (AMDGPUUniformIntrinsicCombineLegacy, DEBUG_TYPE, "AMDGPU Uniform Intrinsic Combine", false, false)
INITIALIZE_PASS_END (AMDGPUUniformIntrinsicCombineLegacy, DEBUG_TYPE, "AMDGPU Uniform Intrinsic Combine", false, false)

This pass simplifies certain intrinsic calls when the arguments are uniform.

This pass's transforms can leave an instruction whose operand was previously recognized as statically uniform no longer recognized as statically uniform. However, the semantics of how programs execute don't (and, for precisely this reason, must not) depend on static uniformity; they only ever depend on dynamic uniformity. Every downstream instruction that cares about dynamic uniformity must be convergent, and instruction selection will introduce a v_readfirstlane for such instructions if their operands cannot be proven statically uniform.
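As an illustration of the kind of rewrite involved, the sketch below forwards the operand of a value-preserving cross-lane intrinsic such as llvm.amdgcn.permlane64 once UniformityInfo proves the operand uniform. This is a minimal sketch, not this file's actual code; the helper name is hypothetical.

#include "llvm/Analysis/UniformityAnalysis.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Hypothetical helper: when every lane already holds the same value, a
// cross-lane broadcast like llvm.amdgcn.permlane64 is the identity, so the
// call can be replaced by its operand.
static bool foldUniformCrossLane(IntrinsicInst &II, const UniformityInfo &UI) {
  Value *Src = II.getArgOperand(0);
  if (!UI.isUniform(Src))
    return false;
  II.replaceAllUsesWith(Src);
  II.eraseFromParent();
  return true;
}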

Definition in file AMDGPUUniformIntrinsicCombine.cpp.

DEBUG_TYPE

#define DEBUG_TYPE "amdgpu-uniform-intrinsic-combine"
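This macro names the channel that gates the file's LLVM_DEBUG output. A minimal sketch of how that gating is used (the function here is illustrative, not code from this file):

#include "llvm/Support/Debug.h"

#define DEBUG_TYPE "amdgpu-uniform-intrinsic-combine"

// In builds with assertions enabled, this prints only when the pass's debug
// channel is selected, e.g.:
//   opt -debug-only=amdgpu-uniform-intrinsic-combine ...
static void noteSimplified() {
  LLVM_DEBUG(llvm::dbgs() << "simplified a uniform intrinsic\n");
}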

INITIALIZE_PASS_BEGIN()

INITIALIZE_PASS_BEGIN (AMDGPUUniformIntrinsicCombineLegacy,
                       DEBUG_TYPE,
                       "AMDGPU Uniform Intrinsic Combine",
                       false,
                       false)
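Together with the matching INITIALIZE_PASS_END above, this macro expands to the standard legacy-pass registration boilerplate, including an initializeAMDGPUUniformIntrinsicCombineLegacyPass(PassRegistry &) hook. A sketch of a typical call site (the wrapper function is illustrative, not code from this file):

#include "llvm/PassRegistry.h"

namespace llvm {
void initializeAMDGPUUniformIntrinsicCombineLegacyPass(PassRegistry &);
} // namespace llvm

// Illustrative: register the legacy pass with the global registry, as AMDGPU
// target initialization would.
static void registerUniformIntrinsicCombine() {
  llvm::PassRegistry &Registry = *llvm::PassRegistry::getPassRegistry();
  llvm::initializeAMDGPUUniformIntrinsicCombineLegacyPass(Registry);
}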

isDivergentUseWithNew()

Wrapper for querying uniformity info that first checks locally tracked instructions.
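A sketch of the documented behavior, assuming the Tracker maps values this pass has already rewritten to a recorded uniformity bit; the exact meaning of the bool is an assumption here, and this is not the file's actual code.

#include "llvm/Analysis/UniformityAnalysis.h"
#include "llvm/IR/Use.h"
#include "llvm/IR/ValueMap.h"

using namespace llvm;

// Sketch: prefer the pass's own bookkeeping for values it has already
// rewritten; otherwise defer to the precomputed UniformityInfo.
static bool isDivergentUseWithNewSketch(
    const Use &U, const UniformityInfo &UI,
    const ValueMap<const Value *, bool> &Tracker) {
  auto It = Tracker.find(U.get());
  if (It != Tracker.end())
    return !It->second; // assumption: true means "known uniform"
  return UI.isDivergentUse(U);
}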

optimizeUniformIntrinsic()

Optimizes uniform intrinsic calls if their operands can be proven uniform.

We deliberately do not simplify readfirstlane with a uniform argument, so that frontends can use it to force a copy to SGPR and thereby prevent the backend from generating unwanted waterfall loops.

Definition at line 56 of file AMDGPUUniformIntrinsicCombine.cpp.

References Changed, llvm::BinaryOperator::CreateNot(), llvm::dbgs(), llvm::dyn_cast(), llvm::CmpInst::ICMP_EQ, llvm::CmpInst::ICMP_NE, II, isDivergentUseWithNew(), LLVM_DEBUG, llvm::PatternMatch::m_Zero(), llvm::make_early_inc_range(), and llvm::PatternMatch::match().

Referenced by runUniformIntrinsicCombine().
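The cross-references above (ICMP_EQ/ICMP_NE, m_Zero(), CreateNot()) suggest that comparisons of a ballot result against zero are folded when the ballot's condition is uniform. The sketch below shows a plausible shape of that fold; it is an illustration under that assumption, not this file's actual code.

#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Sketch: with a uniform condition x,
//   icmp ne (ballot(x), 0)  -->  x      (some lane is set iff x is true)
//   icmp eq (ballot(x), 0)  -->  !x     (no lane is set iff x is false)
static bool foldBallotCompare(IntrinsicInst &Ballot, ICmpInst &Cmp) {
  if (!match(Cmp.getOperand(1), m_Zero()))
    return false;
  Value *Cond = Ballot.getArgOperand(0); // caller has proven this uniform
  Value *Repl = nullptr;
  if (Cmp.getPredicate() == CmpInst::ICMP_NE)
    Repl = Cond;
  else if (Cmp.getPredicate() == CmpInst::ICMP_EQ)
    Repl = BinaryOperator::CreateNot(Cond, "", Cmp.getIterator());
  else
    return false;
  Cmp.replaceAllUsesWith(Repl);
  return true;
}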

runUniformIntrinsicCombine()

Iterates over the intrinsic calls in the Function and attempts to optimize them.
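Putting the pieces together, a sketch of the documented iteration; make_early_inc_range appears in the cross-references above, which permits erasing the current instruction during the walk. This reuses the file's optimizeUniformIntrinsic() as documented and is not the file's actual code.

#include "llvm/ADT/STLExtras.h"
#include "llvm/Analysis/UniformityAnalysis.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/ValueMap.h"

using namespace llvm;

// Documented above in this file; declaration repeated for the sketch.
bool optimizeUniformIntrinsic(IntrinsicInst &II, const UniformityInfo &UI,
                              ValueMap<const Value *, bool> &Tracker);

// Sketch of the driver: visit every intrinsic call once and attempt to
// simplify it; make_early_inc_range tolerates erasure of the current
// instruction.
static bool runUniformIntrinsicCombineSketch(Function &F,
                                             const UniformityInfo &UI) {
  bool Changed = false;
  ValueMap<const Value *, bool> Tracker;
  for (Instruction &I : make_early_inc_range(instructions(F)))
    if (auto *II = dyn_cast<IntrinsicInst>(&I))
      Changed |= optimizeUniformIntrinsic(*II, UI, Tracker);
  return Changed;
}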
