Connecting op verifiers to analyses

I have a project where the following situation happens repeatedly. There exists a rather expensive (e.g., polyhedral model) yet frequently used analysis that expects the ops it processes to have passed the verifier. At the same time, the verification logic itself largely overlaps with the analysis, and is similarly expensive. I’d like to reuse the results of the expensive computation between the analysis and the verifier.

It looks like most of what I need can be achieved by giving the op verification hook access to the analysis manager. In this case, the verifier will get the analysis from the manager, and the following passes will be able to reuse it. If passes preserve the analysis, the verifier can reuse it again. This looks like mostly plumbing work, and is in theory sufficient for my use case.
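For illustration, here is a minimal sketch of the reuse I have in mind on the pass side (the PolyhedralInfo analysis is hypothetical; the pass and analysis-manager APIs are the usual MLIR ones). The verifier would ideally participate in the same cache:

    #include "mlir/Pass/Pass.h"

    namespace {
    // Hypothetical expensive analysis; any class constructible from an
    // Operation* can serve as an MLIR analysis.
    struct PolyhedralInfo {
      PolyhedralInfo(mlir::Operation *op) { /* expensive computation */ }
    };

    struct MyPass : public mlir::PassWrapper<MyPass, mlir::OperationPass<>> {
      void runOnOperation() override {
        // Returns the cached result if it was already computed (by an earlier
        // pass or, in this proposal, by the verifier); otherwise computes it.
        PolyhedralInfo &info = getAnalysis<PolyhedralInfo>();
        (void)info;
        // A pass that does not change the IR keeps the cached analyses valid.
        markAllAnalysesPreserved();
      }
    };
    } // namespace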

A potentially more interesting scenario is a fallible analysis, i.e., a situation where an analysis cannot be performed at all, which indicates a problem with the IR and should be reported by the verifier. It’s unclear to me whether we would like to support such things natively in the infrastructure or not.

Thoughts? Ideas?

I think this is an abuse of the verifier. The individual verify() hooks should only check simple (ideally local) properties that must always hold for operations. More complicated analyses (e.g. I was just fighting with IsolatedFromAbove checking) turn into N^2 compile-time messes - recall that the verifier is invoked frequently.
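For contrast, a minimal sketch of the kind of cheap, purely local check meant here, written as a free function over Operation* so it needs no hypothetical op class (a real op would do this in its verify() hook; the specific invariant is made up):

    #include "mlir/IR/Operation.h"

    // Purely structural, constant-time check that only looks at the op itself.
    static mlir::LogicalResult verifyHasTwoOperands(mlir::Operation *op) {
      if (op->getNumOperands() != 2)
        return op->emitOpError("expected exactly two operands");
      return mlir::success();
    }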

Also, recall that the verifier is not a pass. It is used outside the pass manager in many places, e.g. by the .mlir parser.

I don’t know exactly what you’re looking to achieve with this analysis, but I’d encourage you to find a way to encode the information you need directly in the IR itself somehow. This allows you to persist it, makes it easy to test the updating logic, etc.

-Chris

ftynse June 1, 2021, 4:35pm 3

You are right that I am checking non-local properties, although “local” is a flexible concept given MLIR’s nested nature. One could claim that checking relations between operations in the verifier of their common parent, e.g., a function, is “local” with respect to that function.

If this doesn’t belong to the verifier, where does it belong?

This is exactly why I want to cache the results rather than recompute them every time the verifier is invoked. Only a small subset of passes will affect them in my case.

There are actually very few places where the (operation) verifier is called: the pass manager and IR construction (in-tree we have two cases - the parser and the translation from LLVM IR - plus one per example in toy and a couple of tests that I don’t count). Any project constructing MLIR presumably calls the verifier once it has finished construction, but I don’t think there are other use cases. If it is running passes after constructing the IR, which it most likely does, it can just as well create the pass manager before running the verifier. This isn’t mandatory, and the verifiers that choose to support it will also have to handle the case where the analysis manager is missing and create the analyses themselves.

As for the use in the pass manager, verifier is indeed not a pass, but the call to mlir::verify is just next to the use of the analysis manager here - llvm-project/mlir/lib/Pass/Pass.cpp at 896f9bc350eba0baf17f1ceae7383d88f0ce2a85 · llvm/llvm-project · GitHub - it’s really just a matter of passing it through.

You can define your own pass. This is not atypical for dialects that have multiple phases anyway. For example, the FIRRTL dialect in CIRCT has several different internal phases “High, Mid, Low” which impose more invariants the further you lower. This means that the invariants the Low phase imposes (e.g. that all widths are inferred) are not checked by the verifier. We have a specific pass to do that.
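For illustration, a minimal sketch of such a dedicated checking pass (the pass name and the invariant it checks are hypothetical):

    #include "mlir/IR/BuiltinOps.h"
    #include "mlir/Pass/Pass.h"

    namespace {
    struct CheckMyInvariantsPass
        : public mlir::PassWrapper<CheckMyInvariantsPass,
                                   mlir::OperationPass<mlir::ModuleOp>> {
      void runOnOperation() override {
        mlir::WalkResult result = getOperation().walk([](mlir::Operation *op) {
          // Hypothetical non-local, phase-specific invariant check.
          if (op->hasAttr("mydialect.invalid")) {
            op->emitError("violates the phase-specific invariant");
            return mlir::WalkResult::interrupt();
          }
          return mlir::WalkResult::advance();
        });
        if (result.wasInterrupted())
          signalPassFailure();
        // Pure checking leaves the IR untouched, so cached analyses stay valid.
        markAllAnalysesPreserved();
      }
    };
    } // namespace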

SIL in Swift has the same thing going on, because some invariants are non-local (e.g. that all values are defined before use). Its “definitive initialization” pass does the dataflow analysis to enforce this and rejects malformed IR with errors. Downstream passes assume this has been done. There are several other examples of this.

Right, I’m suggesting you cache them in the IR. You can do that in many ways, including adding a new node with a big attribute dictionary on it.

 alex.metadataholder {attributes that encode expensive and interesting things} {
   affine.for %arg0 ... {
     affine.for %arg1 ... {
     }
   }
 }

or whatever.

I feel like you’re looking at this in an empirical way (it is true that it is called in few places), which ignores the design intent that went into this. The verify() hook is a hook like fold(): it is intended to be usable by a number of different clients; it isn’t an implementation detail of a particular pass, just like fold() isn’t an implementation detail of canonicalize.

-Chris

ftynse June 2, 2021, 8:14am 5

I have half a dozen analyses/verifications that depend on each other, some of which are invalidated by some, but not all, of the two dozen passes in the pipeline. Unlike FIRRTL and SIL, all verifications must succeed every time; there are no requirements that get progressively imposed. I suppose I can put a “verification pass” before and after every “real” pass. That sounds like I’ll end up replacing all of the verification with such passes, because having two different mechanisms for verification increases test complexity and maintenance load. I would have hoped for better support from the infrastructure…

Are you suggesting the traditional polyhedral form? :wink:

Without going into the discussion on whether practical uses should override design intent, what do you see as use scenarios for this hook other than constructing IR and checking pass sanity?

A key example that is important to me is “graph-building APIs” (both for hardware and ML).

Consider something like a PyTorch/NumPy tracing implementation that builds a graph as user-defined code (e.g. in Python) is executed. Each operation you stage creates an op. Being able to call the verifier on it as you go is incredibly valuable, because simple shape errors can be immediately reported to the user.

This isn’t a theoretical example, we have this exact use case on the hardware side, but instead of shape errors they are more complicated type constraints.
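A minimal sketch of that use case (the staging helper is hypothetical; mlir::verify and OpBuilder are the usual APIs): each op is verified right after it is created, outside any pass manager.

    #include "mlir/IR/Builders.h"
    #include "mlir/IR/Verifier.h"

    // Create one op from a tracing/graph-building frontend and verify it
    // immediately so the user sees e.g. shape errors at trace time.
    mlir::LogicalResult stageOp(mlir::OpBuilder &builder,
                                mlir::OperationState &state) {
      mlir::Operation *op = builder.create(state);
      // This runs outside the pass manager, which is why verify() hooks
      // cannot assume an analysis manager is always available.
      if (mlir::failed(mlir::verify(op)))
        return op->emitError("staged op failed verification");
      return mlir::success();
    }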

-Chris

I wonder if there is a middle-of-the-road path we can take here, analogous to the verifySymbolUses method on SymbolUserOpInterface, which needs to calculate some expensive-to-compute state (a collection of nested symbol tables) in order to run its verification hook.
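For reference, a minimal sketch of that pattern (the op is hypothetical; the interface hook and SymbolTableCollection are the existing ones): the verifier builds the symbol tables once and hands them to each op’s hook.

    #include "mlir/IR/BuiltinAttributes.h"
    #include "mlir/IR/SymbolTable.h"

    // Hook of a hypothetical op that declares SymbolUserOpInterface and has a
    // 'callee' FlatSymbolRefAttr; the shared SymbolTableCollection avoids
    // recomputing symbol tables for every user op.
    mlir::LogicalResult
    MyCallLikeOp::verifySymbolUses(mlir::SymbolTableCollection &symbolTable) {
      auto callee =
          getOperation()->getAttrOfType<mlir::FlatSymbolRefAttr>("callee");
      if (!callee)
        return emitOpError("expected a 'callee' symbol reference attribute");
      if (!symbolTable.lookupNearestSymbolFrom(getOperation(), callee))
        return emitOpError("'") << callee.getValue()
                                << "' does not reference a valid symbol";
      return mlir::success();
    }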

ftynse June 14, 2021, 12:00pm 8

I don’t see anything that I would like to have as preventing this use case. You can still run the verifier manually at whatever place. If it is given access to a (non-null) analysis manager, it can use it, otherwise it should compute whatever it needs.

The bug (50701 – MLIR PassManager verifies too much) you have just filed curiously aligns with my line of thought, especially this part:

Passes are allowed to indicate that they don’t change the IR by signaling that they “preserve all analyses”. When this happens, we shouldn’t run the verifier.

This sounds like the verifier should just be an analysis.
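To make that concrete, a minimal sketch of what “the verifier as an analysis” could look like (the wrapper class is hypothetical; any class constructible from an Operation* can be cached by the analysis manager):

    #include "mlir/IR/Verifier.h"
    #include "mlir/Pass/AnalysisManager.h"

    // Caches the result of running the verifier on the root op. Requesting it
    // via getAnalysis<VerificationResult>() would rerun verification only when
    // a preceding pass did not preserve it.
    struct VerificationResult {
      VerificationResult(mlir::Operation *op) : result(mlir::verify(op)) {}
      mlir::LogicalResult result;
    };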

Hi Alex,

It depends on what you mean by “the verifier”. “The verifier” is at least three things:

  1. Low level verification hooks like AbstractOperation::verifyInvariants and verifyOperationAttribute that are implemented by dialects.
  2. The mlir::verify(Operation *op) that is the current entrypoint to the verifier. It applies those hooks to an operation and its regions, and checks additional properties about dominance etc.
  3. The pass manager, which invokes the verifier after passes.

You’re right that there is some similarity between the third of these and an analysis pass, but I don’t think that is what you’re proposing. I thought you were proposing that the primitive hooks can require other analyses. This defeats clients of #1 (what I mentioned above) and #2 (e.g. things like the .mlir parser which aren’t part of the pass manager).

-Chris

clattner August 28, 2021, 4:41pm 10

Two months later… Alex is right. It would be really great to have something like this :-). Even things like “calls” that want to resolve their symbol to a callee would greatly benefit from having a symbol cache.

@ftynse have you pushed on this at all?

Calls should already be able to verify themselves through SymbolUserOpInterface.

ftynse August 31, 2021, 12:56pm 12

I had a simple prototype that threads the AnalysisManager through the op verifier calls with no need to modify existing verifier hooks. Attributes and types should be equally straightforward. ⚙ D108991 WIP: thread AnalysisManager through verifier. This solved my initial problem, and there was no interest in pushing it further. The interesting questions are:

  1. Do we want to create a fresh analysis manager in a simple mlir::verify call outside of pass pipelines?
  2. Which op should the analysis manager be rooted at? Currently it’s the same as for the pass manager; I’m not sure whether we want to root it there, at the current op, at the closest isolated-from-above parent, or somewhere else.
  3. Layering: the analysis manager is in Pass/, the verifier is in IR/. So far this is transparent because we only pass around the pointer in the IR/ part; concrete ops will have to include Pass/AnalysisManager.h, which sounds awkward.

jpienaar August 31, 2021, 1:20pm 13

+1, everything with a symbol ref (including, for example, function-based control flow) should be able to do this (it also avoids TSAN issues 🙂)

I’d assume analysis manager would be optional. If not given you have to operate conservatively and with what info you have.

This is a tricky one as it doesn’t seem uniform for all ops - well unless you always give it the one the pass is rooted on and it can query a nested one (querying a parent of the op the pass is nested on wouldn’t be safe).

Only concrete ops that need to use the analysis manager for their verification would; others would just take it as an opaque pointer, correct?

ftynse August 31, 2021, 3:22pm 14

Right. I was thinking that if the analysis manager is null and the verifier still wants the analysis, it can construct that analysis directly. This is costly if done every time the verifier runs, hence my original hope of connecting analyses and the verifier.

Ops that want to opt into using the analysis manager need to redefine verifyAnalyzed, which accepts the manager, instead of verify. Ops that don’t want this can keep using verify as before and not even know the pointer exists.
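For illustration, a minimal sketch of what such a hook could look like (the verifyAnalyzed signature, the op, and the PolyhedralInfo analysis are made up for the example; the actual prototype is the D108991 revision linked above):

    #include "mlir/Pass/AnalysisManager.h"

    // Hypothetical expensive analysis, redeclared here to keep the sketch
    // self-contained.
    struct PolyhedralInfo {
      PolyhedralInfo(mlir::Operation *op) { /* expensive computation */ }
      mlir::LogicalResult checkOp(mlir::Operation *op) { return mlir::success(); }
    };

    // Hook of a hypothetical op: use the analysis manager when it is provided,
    // otherwise compute the analysis directly (costly, but correct).
    mlir::LogicalResult MyOp::verifyAnalyzed(mlir::AnalysisManager *am) {
      if (am)
        return am->getAnalysis<PolyhedralInfo>().checkOp(getOperation());
      PolyhedralInfo info(getOperation());
      return info.checkOp(getOperation());
    }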

clattner September 21, 2021, 8:50pm 15

Just to follow up on this, SymbolUserOpInterface is exactly what I was missing. Thanks for the pointer Sean!

These are all really great questions and I’m not sure what the best path is. The thing I’d observe is that “IsolatedFromAbove” ops are the natural unit of parallelization and sharing in an MLIR region tree, so tying into that seems like it would make sense.

Maybe the verifier could ensure that the corresponding analyses are computed (and maintain the mapping of op → analysis) when walking down the region tree, and individual operations that query it would check the analysis manager. Operations would have to be written in a conservative way to handle “missing analysis information”, which would be the result when you verify just an operation on its own without verifying the entire IR en masse.

-Chris

ftynse January 27, 2023, 11:33am 16

Reviving this as it appears we may have another use case for data layout information in the verifier: specifically, we want to allow certain memory transfer operations on vectors only if we know that the vectors are properly aligned in memory.
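As a sketch of that use case (the alignment rule and the helper function are hypothetical; DataLayout and its queries are the existing data layout interface), a verifier with access to cached data layout information could check something like:

    #include "mlir/IR/BuiltinTypes.h"
    #include "mlir/IR/Operation.h"
    #include "mlir/Interfaces/DataLayoutInterfaces.h"

    // Hypothetical rule: only allow the transfer if the vector type's ABI
    // alignment (as reported by the data layout) is at least 16 bytes.
    mlir::LogicalResult checkVectorTransfer(mlir::Operation *op,
                                            const mlir::DataLayout &layout,
                                            mlir::VectorType vecType) {
      if (layout.getTypeABIAlignment(vecType) < 16)
        return op->emitError("vector transfer requires 16-byte aligned vectors");
      return mlir::success();
    }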