The missing guide to Dataflow Analysis in MLIR

MLIR comes with the Dataflow Analysis Framework out of the box.
This article describes how it works, what the difference between Dense and Sparse analyses is, and, most importantly, some pitfalls and gotchas you should be aware of before you start using it.

This section is a very brief recap of the dataflow framework.
For a more detailed and certainly more correct explanation, consider watching these lectures or diving even deeper with the SPA book (chapters 4 and 5).

The goal of a dataflow analysis is to collect some facts and information about the program, so that these facts can be used later for further analyses or transformations.

MLIR’s dataflow analysis is based on the classical monotone framework. Conceptually, it’s a very simple idea: run the analysis iteratively until no more new facts can be derived:

let analysis = DataflowAnalysis();
// Start with every node of the program in the worklist.
let worklist = ParseProgram(sourceCode).allNodes();
while (!worklist.empty()) {
  let node = worklist.pop();
  // Combine the facts already known at the node's predecessors.
  let knownFacts = analysis.getFacts(node.predecessors());
  if (analysis.canDeriveNewFacts(node, knownFacts)) {
    let newFacts = analysis.deriveNewFacts(node, knownFacts);
    analysis.mergeFacts(newFacts, knownFacts);
    // The new facts may invalidate what the successors know,
    // so they have to be re-analysed.
    worklist.add(node.successors());
  }
}

The pseudocode above shows this idea in action: try deriving new facts for each node in the program, if new facts are indeed found, then update the existing information and add the successor nodes for further analysis.
The last step is crucial: the newly derived facts “invalidate” facts for the subsequent nodes, so some nodes might be analysed multiple times.

The invalidation part is important for at least two reasons. First, correctness: without re-analysis, facts flowing along back edges (e.g. in loops) would never reach the nodes that need them. Second, termination: the worklist only drains if re-analysis eventually stops deriving new facts, which is exactly what the monotone framework guarantees, provided the transfer functions are monotone and the lattice has finite height.

While MLIR has a generic implementation of the monotone framework, it also comes with several predefined implementations: DenseForward, DenseBackward, SparseForward, and SparseBackward analyses. Whenever you want to build your own analysis, one of these is probably the right starting point (see the sketch below).
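If it helps to see the actual names, here is a minimal sketch of the corresponding base classes, assuming the LLVM 20 headers; MyDenseLattice and MyValue are hypothetical placeholder types, not part of MLIR:

#include "mlir/Analysis/DataFlow/DenseAnalysis.h"
#include "mlir/Analysis/DataFlow/SparseAnalysis.h"

// Dense analyses attach a lattice (derived from AbstractDenseLattice)
// to program points:
//   mlir::dataflow::DenseForwardDataFlowAnalysis<MyDenseLattice>
//   mlir::dataflow::DenseBackwardDataFlowAnalysis<MyDenseLattice>
//
// Sparse analyses attach a per-value state, typically a
// mlir::dataflow::Lattice<MyValue>:
//   mlir::dataflow::SparseForwardDataFlowAnalysis<Lattice<MyValue>>
//   mlir::dataflow::SparseBackwardDataFlowAnalysis<Lattice<MyValue>>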

While the difference between Forward and Backward is clear, the Dense vs Sparse distinction might be confusing at first.

The main difference between these two flavours is the way the information or facts are propagated.

MLIR’s dataflow framework attaches facts about a program to abstract lattice anchors. An anchor can be either an mlir::ProgramPoint, an mlir::Value, or an mlir::GenericLatticeAnchor.

The ProgramPoint represents the location of a fact within the program, e.g. “before an operation” or “after an operation.” Thus, the facts are propagated from one operation to another via (implicit) control-flow edges. This is the essence of the Dense Analysis.

The Value can be used to attach the facts to specific SSA values. The facts are propagated from one value to another via data-flow edges. This is the essence of the Sparse Analysis.

The generic anchor can be used to encode anything that doesn’t fit a ProgramPoint or a Value. For example, DeadCodeAnalysis uses it to represent a CFGEdge.
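To make the anchors a bit more tangible, here is a hedged sketch of how each kind can be obtained, assuming the LLVM 20 API; op, fromBlock, and toBlock are placeholders assumed to be in scope:

mlir::DataFlowSolver solver;

// 1. ProgramPoint anchors: "before op" / "after op"; used by Dense analyses.
mlir::ProgramPoint *before = solver.getProgramPointBefore(op);
mlir::ProgramPoint *after = solver.getProgramPointAfter(op);

// 2. Value anchors: a specific SSA value; used by Sparse analyses.
mlir::Value result = op->getResult(0);

// 3. Generic anchors: anything else, e.g. the CFG edge between two
//    blocks that DeadCodeAnalysis uses to track edge liveness.
auto *edge =
    solver.getLatticeAnchor<mlir::dataflow::CFGEdge>(fromBlock, toBlock);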

Below is an illustration of the difference between Dense and Sparse analyses for the following program:

func.func @do_stuff(%a: i32, %b: i32) -> i32 {
  %two = arith.constant 2 : i32
  %three = arith.constant 3 : i32
  %t1 = arith.muli %a, %two : i32
  %t2 = arith.muli %b, %three : i32
  %res = arith.addi %t1, %t2 : i32
  return %res : i32
}

In the case of the Dense forward analysis, any change to the facts at the op %three triggers re-analysis of all the ops “below” it.
In the case of the Sparse forward analysis, though, the same change would only affect “half” of the program.
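Concretely, suppose the fact attached to %three changes; a walk-through of the example above:

// Dense forward:  the change lands "after %three", so every op below it
//                 is re-visited: %t1, %t2, %res and the return.
//
// Sparse forward: the change is attached to the value %three, so only
//                 its transitive users are re-visited: %t2, %res and
//                 the return.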

The APIs of both analyses reflect the lattice anchors:

// Dense Analysis: one state before and one state after the op
LogicalResult visitOperation(Operation *op,
                             const AnalysisState &before,
                             AnalysisState *after);

// Sparse Analysis: one state per operand and one per result value
LogicalResult visitOperation(Operation *op,
                             ArrayRef<const AnalysisState *> operands,
                             ArrayRef<AnalysisState *> results);

These are precisely the transfer/merge functions from the monotone framework, and it’s the analysis author’s responsibility to implement them.
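For example, a sparse forward analysis might look like the following sketch. In a real subclass the states are concrete lattice types rather than the plain AnalysisState shown above; MyFact, computeFact, and MyFact::getUnknown are hypothetical placeholders, not MLIR APIs:

using mlir::dataflow::Lattice;

struct MyFancyAnalysis
    : public mlir::dataflow::SparseForwardDataFlowAnalysis<Lattice<MyFact>> {
  using SparseForwardDataFlowAnalysis::SparseForwardDataFlowAnalysis;

  LogicalResult
  visitOperation(Operation *op, ArrayRef<const Lattice<MyFact> *> operands,
                 ArrayRef<Lattice<MyFact> *> results) override {
    for (Lattice<MyFact> *result : results) {
      // Derive a new fact from the op and its operands' facts...
      MyFact fact = computeFact(op, operands);
      // ...and merge it in. propagateIfChanged re-enqueues dependants
      // only if the lattice value actually changed.
      propagateIfChanged(result, result->join(fact));
    }
    return mlir::success();
  }

  // Required override: the state at entry points (e.g. entry block
  // arguments), where nothing better is known.
  void setToEntryState(Lattice<MyFact> *lattice) override {
    propagateIfChanged(lattice, lattice->join(MyFact::getUnknown()));
  }
};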

There is a very important detail one must keep in mind!

Looking at the documentation around the Dataflow Analysis Framework, you can find the following lines at the top of both SparseAnalysis.h and DenseAnalysis.h (emphasis mine):

This file implements sparse data-flow analysis using the data-flow analysis framework. The analysis is forward and conditional and uses the results of dead code analysis to prune dead code during the analysis.

What the documentation doesn’t say, however, is that by default everything is considered dead.
If you fail to include DeadCodeAnalysis in the solver, then all of the operations will be skipped as “dead”.
I observed similar behaviour due to the lack of SparseConstantPropagation: some nested regions (coming from scf) were ignored, even though they were in fact live and reachable.

At the time of writing (LLVM 20), this is the way to run your own analysis:

mlir::DataFlowSolver solver;

// These two are strictly required for the complete analysis to work
solver.load<mlir::dataflow::DeadCodeAnalysis>();
solver.load<mlir::dataflow::SparseConstantPropagation>();

solver.load<MyFancyAnalysis>();
// initializeAndRun returns a LogicalResult; don't ignore it.
if (failed(solver.initializeAndRun(op)))
  return signalPassFailure(); // e.g. when running inside a pass
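Once the solver has run, the computed facts can be queried per lattice anchor. As a hedged example (someValue is a placeholder), here is how one could read back the facts computed by SparseConstantPropagation:

using mlir::dataflow::ConstantValue;
using mlir::dataflow::Lattice;

// Sparse states are anchored on SSA values, so we look one up by Value.
if (const auto *lattice =
        solver.lookupState<Lattice<ConstantValue>>(someValue)) {
  const ConstantValue &cv = lattice->getValue();
  if (!cv.isUninitialized() && cv.getConstantValue())
    ; // someValue is known to be the constant cv.getConstantValue()
}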

This is the part you should watch very carefully, especially when upgrading to a newer version of LLVM/MLIR: at some point a new implicitly “required” analysis might be introduced, and missing it may leave you with an incomplete view of the program.

I’m still learning about the framework internals, so if you know of any examples of out-of-tree analyses - any pointers would be highly appreciated 🙌

I highly recommend watching these two talks to get a better idea of the motivation behind the current design and real-world use-cases.
