Redundant barrier elimination
Vladimir Kozlov vladimir.kozlov at oracle.com
Wed Feb 12 11:41:17 PST 2014
Hi, Doug
Thank you for your suggestion. I filed RFE:
https://bugs.openjdk.java.net/browse/JDK-8034810
Thanks, Vladimir
On 2/12/14 6:40 AM, Doug Lea wrote:
While exploring JMM update options, we noticed that hotspot will tend to pile up memory barrier nodes without bothering to coalesce them into one barrier instruction. Considering the likelihood that JMM updates will accentuate pile-ups, I looked into improvements.

Currently, there is only one case where coalescing is attempted: Matcher::post_store_load_barrier does a TSO-specific forward pass that handles only MemBarVolatile. This is a harder case than others, because it takes into account that other MemBars are no-ops on TSO. It is (or should be) called only from the matcher's DFA on x86 and sparc, so it does not apply on processors for which MemBarAcquire and MemBarRelease are not no-ops. But for all (known) processors, you can always do an easier check for redundancy, buttressed by hardware-model-specific ones like post_store_load_barrier when applicable.

I put together the following, which does a basic check, but I don't offhand know of a CPU-independent place to call it from. Needing to invoke this from each barrier case in each .ad file seems suboptimal. Any advice would be welcome. Or perhaps suggestions about placing similar functionality somewhere other than Matcher?

Thanks!

... diffs from JDK9 (warning: I haven't even tried to compile this)

diff -r 4c8bda53850f src/share/vm/opto/matcher.cpp
--- a/src/share/vm/opto/matcher.cpp  Thu Feb 06 13:08:44 2014 -0800
+++ b/src/share/vm/opto/matcher.cpp  Wed Feb 12 09:07:17 2014 -0500
@@ -2393,6 +2393,54 @@
   return false;
 }

+// Detect if the current barrier is redundant. Returns true if there is
+// another upcoming barrier or atomic operation with at least the same
+// properties before the next store or load. Assumes that MemBarVolatile
+// and CompareAndSwap* provide "full" fences, and that non-biased
+// FastLock/FastUnlock provide acquire/release.
+bool Matcher::is_redundant_barrier(const Node* vmb) {
+  Compile* C = Compile::current();
+  assert(vmb->is_MemBar(), "");
+  const MemBarNode* membar = vmb->as_MemBar();
+  int vop = vmb->Opcode();
+
+  // Get the Ideal Proj node, ctrl, that can be used to iterate forward.
+  Node* ctrl = NULL;
+  for (DUIterator_Fast imax, i = membar->fast_outs(imax); i < imax; i++) {
+    Node* p = membar->fast_out(i);
+    assert(p->is_Proj(), "only projections here");
+    if ((p->as_Proj()->_con == TypeFunc::Control) &&
+        !C->node_arena()->contains(p)) { // Unmatched old-space only
+      ctrl = p;
+      break;
+    }
+  }
+  assert((ctrl != NULL), "missing control projection");
+
+  // Scan the users of the control projection: if another barrier or
+  // atomic op at least as strong appears before any load, store, call,
+  // safepoint, or block end, this barrier is redundant.
+  for (DUIterator_Fast jmax, j = ctrl->fast_outs(jmax); j < jmax; j++) {
+    Node* x = ctrl->fast_out(j);
+    int xop = x->Opcode();
+
+    if (xop == vop ||
+        xop == Op_MemBarVolatile ||
+        xop == Op_CompareAndSwapL ||
+        xop == Op_CompareAndSwapP ||
+        xop == Op_CompareAndSwapN ||
+        xop == Op_CompareAndSwapI ||
+        (!UseBiasedLocking &&
+         ((xop == Op_FastLock   && vop == Op_MemBarAcquire) ||
+          (xop == Op_FastUnlock && vop == Op_MemBarRelease)))) {
+      return true;
+    }
+
+    if (x->is_Load() || x->is_Store() || x->is_LoadStore() ||
+        x->is_Call() || x->is_SafePoint() || x->is_block_proj()) {
+      break;
+    }
+  }
+  return false;
+}
+
 //=============================================================================
 //---------------------------State---------------------------------------------
 State::State(void) {
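To make the shape of the check concrete outside HotSpot, below is a minimal standalone sketch of the same forward scan, written against a flat instruction list rather than C2's ideal graph. This is not HotSpot code: the Op enum, the covers() helper, and the driver are all invented for illustration. It keeps the patch's assumptions that full fences and CompareAndSwap*-style atomics subsume weaker barriers, and that the scan must stop at the first intervening memory access.

#include <cstddef>
#include <cstdio>
#include <vector>

// Toy instruction kinds standing in for C2 node opcodes.
enum Op { LOAD, STORE, ACQUIRE, RELEASE, FULL, CAS };

// Does a later op x provide at least the ordering of barrier b?
// Mirrors the patch's assumption: full fences and CAS cover any
// barrier; otherwise only an identical barrier counts.
static bool covers(Op x, Op b) {
  return x == b || x == FULL || x == CAS;
}

static bool is_memory_access(Op x) {
  return x == LOAD || x == STORE;
}

// The barrier at position i is redundant if an op covering it appears
// before the next memory access. The real pass also stops at calls,
// safepoints, and block-ending projections; a flat list only needs
// the access check.
static bool is_redundant_barrier(const std::vector<Op>& ops, size_t i) {
  for (size_t j = i + 1; j < ops.size(); j++) {
    if (covers(ops[j], ops[i])) return true;
    if (is_memory_access(ops[j])) return false;
  }
  return false;
}

int main() {
  // store; full; full; load -- the first full barrier is redundant.
  std::vector<Op> ops = { STORE, FULL, FULL, LOAD };
  for (size_t i = 0; i < ops.size(); i++) {
    if ((ops[i] == ACQUIRE || ops[i] == RELEASE || ops[i] == FULL) &&
        is_redundant_barrier(ops, i)) {
      std::printf("barrier at index %zu is redundant\n", i);
    }
  }
  return 0;
}

Run as-is, this prints "barrier at index 1 is redundant": the second full barrier covers the first before any load or store intervenes, which is the same situation the patch detects when walking the users of a barrier's control projection.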