torch.fx — PyTorch 2.0 documentation

Overview

FX is a toolkit for developers to use to transform nn.Module instances. FX consists of three main components: a symbolic tracer, an intermediate representation, and Python code generation. A demonstration of these components in action:

import torch

# Simple module for demonstration
class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return self.linear(x + self.param).clamp(min=0.0, max=1.0)

module = MyModule()

from torch.fx import symbolic_trace

# Symbolic tracing frontend - captures the semantics of the module

symbolic_traced : torch.fx.GraphModule = symbolic_trace(module)

# High-level intermediate representation (IR) - Graph representation

print(symbolic_traced.graph)
"""
graph():
    %x : [#users=1] = placeholder[target=x]
    %param : [#users=1] = get_attr[target=param]
    %add : [#users=1] = call_function[target=operator.add](args = (%x, %param), kwargs = {})
    %linear : [#users=1] = call_module[target=linear](args = (%add,), kwargs = {})
    %clamp : [#users=1] = call_method[target=clamp](args = (%linear,), kwargs = {min: 0.0, max: 1.0})
    return clamp
"""

# Code generation - valid Python code

print(symbolic_traced.code)
"""
def forward(self, x):
    param = self.param
    add = x + param;  x = param = None
    linear = self.linear(add);  add = None
    clamp = linear.clamp(min = 0.0, max = 1.0);  linear = None
    return clamp
"""

The symbolic tracer performs "symbolic execution" of the Python code. It feeds fake values, called Proxies, through the code. Operations on these Proxies are recorded. More information about symbolic tracing can be found in the symbolic_trace() and Tracer documentation.

The intermediate representation is the container for the operations that were recorded during symbolic tracing. It consists of a list of Nodes that represent function inputs, callsites (to functions, methods, or torch.nn.Module instances), and return values. More information about the IR can be found in the documentation for Graph. The IR is the format on which transformations are applied.

Python code generation is what makes FX a Python-to-Python (or Module-to-Module) transformation toolkit. For each Graph IR, we can create valid Python code matching the Graph's semantics. This functionality is wrapped up in GraphModule, which is a torch.nn.Module instance that holds a Graph as well as a forward method generated from the Graph.

Taken together, this pipeline of components (symbolic tracing -> intermediate representation -> transforms -> Python code generation) constitutes the Python-to-Python transformation pipeline of FX. In addition, these components can be used separately. For example, symbolic tracing can be used in isolation to capture a form of the code for analysis (and not transformation) purposes. Code generation can be used for programmatically generating models, for example from a config file. There are many uses for FX!

Several example transformations can be found at the examples repository.

Writing Transformations

What is an FX transform? Essentially, it’s a function that looks like this.

import torch
import torch.fx

def transform(m: torch.nn.Module,
              tracer_class : type = torch.fx.Tracer) -> torch.nn.Module:
    # Step 1: Acquire a Graph representing the code in `m`

    # NOTE: torch.fx.symbolic_trace is a wrapper around a call to
    # fx.Tracer.trace and constructing a GraphModule. We'll
    # split that out in our transform to allow the caller to
    # customize tracing behavior.
    graph : torch.fx.Graph = tracer_class().trace(m)

    # Step 2: Modify this Graph or create a new one
    graph = ...

    # Step 3: Construct a Module to return
    return torch.fx.GraphModule(m, graph)

Your transform will take in a torch.nn.Module, acquire a Graph from it, do some modifications, and return a new torch.nn.Module. You should think of the torch.nn.Module that your FX transform returns as identical to a regular torch.nn.Module – you can pass it to another FX transform, you can pass it to TorchScript, or you can run it. Ensuring that the inputs and outputs of your FX transform are a torch.nn.Module will allow for composability.

Note

It is also possible to modify an existing GraphModule instead of creating a new one, like so:

import torch
import torch.fx

def transform(m : torch.nn.Module) -> torch.nn.Module:
    gm : torch.fx.GraphModule = torch.fx.symbolic_trace(m)

    # Modify gm.graph
    # <...>

    # Recompile the forward() method of `gm` from its Graph
    gm.recompile()

    return gm

Note that you MUST call GraphModule.recompile() to bring the generated forward() method on the GraphModule in sync with the modified Graph.

Given that you've passed in a torch.nn.Module that has been traced into a Graph, there are now two primary approaches you can take to building a new Graph.

A Quick Primer on Graphs

Full treatment of the semantics of graphs can be found in the Graph documentation, but we are going to cover the basics here. A Graph is a data structure that represents a method on a GraphModule. The information that this requires is:

  1. What are the inputs to the method?
  2. What are the operations that run inside the method?
  3. What is the output (i.e. return) value of the method?

All three of these concepts are represented with Node instances. Let’s see what we mean by that with a short example:

import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return torch.topk(torch.sum(
            self.linear(x + self.linear.weight).relu(), dim=-1), 3)

m = MyModule()
gm = torch.fx.symbolic_trace(m)

gm.graph.print_tabular()

Here we define a module MyModule for demonstration purposes, instantiate it, symbolically trace it, then call the Graph.print_tabular() method to print out a table showing the nodes of this Graph:

opcode         name           target                    args                kwargs
-------------  -------------  ------------------------  ------------------  -----------
placeholder    x              x                         ()                  {}
get_attr       linear_weight  linear.weight             ()                  {}
call_function  add_1          <built-in function add>   (x, linear_weight)  {}
call_module    linear_1       linear                    (add_1,)            {}
call_method    relu_1         relu                      (linear_1,)         {}
call_function  sum_1          <built-in method sum …>   (relu_1,)           {'dim': -1}
call_function  topk_1         <built-in method topk …>  (sum_1, 3)          {}
output         output         output                    (topk_1,)           {}

We can use this information to answer the questions we posed above.

Given that we now know the basics of how code is represented in FX, we can now explore how we would edit a Graph.

Graph Manipulation

Direct Graph Manipulation

One approach to building this new Graph is to directly manipulate your old one. To aid in this, we can simply take the Graph we obtain from symbolic tracing and modify it. For example, let's say we desire to replace torch.add() calls with torch.mul() calls.

import torch
import torch.fx as fx

# Sample module
class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

def transform(m: torch.nn.Module,
              tracer_class : type = fx.Tracer) -> torch.nn.Module:
    graph : fx.Graph = tracer_class().trace(m)
    # FX represents its Graph as an ordered list of
    # nodes, so we can iterate through them.
    for node in graph.nodes:
        # Checks if we're calling a function (i.e:
        # torch.add)
        if node.op == 'call_function':
            # The target attribute is the function
            # that call_function calls.
            if node.target == torch.add:
                node.target = torch.mul

    graph.lint() # Does some checks to make sure the
                 # Graph is well-formed.

    return fx.GraphModule(m, graph)

We can also do more involved Graph rewrites, such as deleting or appending nodes. To aid in these transformations, FX has utility functions for transforming the graph that can be found in the Graph documentation. An example of using these APIs to append a torch.relu() call can be found below.

# Specifies the insertion point. Any nodes added to the
# Graph within this scope will be inserted after `node`
with traced.graph.inserting_after(node):
    # Insert a new `call_function` node calling `torch.relu`
    new_node = traced.graph.call_function(
        torch.relu, args=(node,))

    # We want all places that used the value of `node` to
    # now use that value after the `relu` call we've added.
    # We use the `replace_all_uses_with` API to do this.
    node.replace_all_uses_with(new_node)

For simple transformations that only consist of substitutions, you can also make use of the subgraph rewriter.

Subgraph Rewriting With replace_pattern()

FX also provides another level of automation on top of direct graph manipulation. The replace_pattern() API is essentially a "find/replace" tool for editing Graphs. It allows you to specify a pattern and replacement function and it will trace through those functions, find instances of the group of operations in the pattern graph, and replace those instances with copies of the replacement graph. This can help to greatly automate tedious graph manipulation code, which can get unwieldy as the transformations get more complex.
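As a brief, hedged sketch of the API (the module M and the add-to-mul rewrite here are illustrative, mirroring the direct-manipulation example above): we express the pattern and its replacement as ordinary functions and let replace_pattern() do the graph surgery:

import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.add(x, y)

# Both the pattern to search for and the code to substitute in its
# place are specified as ordinary Python functions; FX traces them.
def pattern(x, y):
    return torch.add(x, y)

def replacement(x, y):
    return torch.mul(x, y)

traced = fx.symbolic_trace(M())
# Rewrites `traced` in place: every occurrence of the pattern
# subgraph is replaced with a copy of the replacement subgraph.
fx.replace_pattern(traced, pattern, replacement)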

Graph Manipulation Examples

Proxy/Retracing

Another way of manipulating Graphs is by reusing the Proxy machinery used in symbolic tracing. For example, let's imagine that we wanted to write a transformation that decomposed PyTorch functions into smaller operations. It would transform every F.relu(x) call into (x > 0) * x. One possibility would be to perform the requisite graph rewriting to insert the comparison and multiplication after the F.relu, and then clean up the original F.relu. However, we can automate this process by using Proxy objects to automatically record operations into the Graph.

To use this method, we write the operations that we want inserted as regular PyTorch code and invoke that code with Proxy objects as arguments. These Proxy objects will capture the operations that are performed on them and append them to the Graph.

import torch
import torch.fx as fx
import torch.nn.functional as F

# Note that this decomposition rule can be read as regular Python
def relu_decomposition(x):
    return (x > 0) * x

decomposition_rules = {}
decomposition_rules[F.relu] = relu_decomposition

def decompose(model: torch.nn.Module,
              tracer_class : type = fx.Tracer) -> torch.nn.Module:
    """
    Decompose `model` into smaller constituent operations.
    Currently, this only supports decomposing ReLU into its
    mathematical definition: (x > 0) * x
    """
    graph : fx.Graph = tracer_class().trace(model)
    new_graph = fx.Graph()
    env = {}
    tracer = torch.fx.proxy.GraphAppendingTracer(new_graph)
    for node in graph.nodes:
        if node.op == 'call_function' and node.target in decomposition_rules:
            # By wrapping the arguments with proxies,
            # we can dispatch to the appropriate
            # decomposition rule and implicitly add it
            # to the Graph by symbolically tracing it.
            proxy_args = [
                fx.Proxy(env[x.name], tracer) if isinstance(x, fx.Node) else x
                for x in node.args]
            output_proxy = decomposition_rules[node.target](*proxy_args)

            # Operations on `Proxy` always yield new `Proxy`s, and the
            # return value of our decomposition rule is no exception.
            # We need to extract the underlying `Node` from the `Proxy`
            # to use it in subsequent iterations of this transform.
            new_node = output_proxy.node
            env[node.name] = new_node
        else:
            # Default case: we don't have a decomposition rule for this
            # node, so just copy the node over into the new graph.
            new_node = new_graph.node_copy(node, lambda x: env[x.name])
            env[node.name] = new_node
    return fx.GraphModule(model, new_graph)

In addition to avoiding explicit graph manipulation, using Proxys also allows you to specify your rewrite rules as native Python code. For transformations that require a large amount of rewrite rules (such as vmap or grad), this can often improve readability and maintainability of the rules. Note that when constructing each Proxy we also passed a tracer pointing to the underlying graph. This is done so that, in case the operations in the graph are n-ary (e.g. add is a binary operator), the calls to Proxy do not create multiple instances of a graph tracer, which can lead to unexpected runtime errors. We recommend this method of using Proxy especially when the underlying operators cannot be safely assumed to be unary.

A worked example of using Proxys for Graph manipulation can be found here.

The Interpreter Pattern

A useful code organizational pattern in FX is to loop over all the Nodes in a Graph and execute them. This can be used for several things including runtime analysis of values flowing through the graph or transformation of the code via retracing with Proxys. For example, suppose we want to run a GraphModule and record the torch.Tensor shape and dtype properties on the nodes as we see them at runtime. That might look like:

import torch
import torch.fx
from torch.fx.node import Node

from typing import Dict

class ShapeProp:
    """
    Shape propagation. This class takes a `GraphModule`.
    Then, its `propagate` method executes the `GraphModule`
    node-by-node with the given arguments. As each operation
    executes, the ShapeProp class stores away the shape and
    element type for the output values of each operation on
    the `shape` and `dtype` attributes of the operation's
    `Node`.
    """
    def __init__(self, mod):
        self.mod = mod
        self.graph = mod.graph
        self.modules = dict(self.mod.named_modules())

def propagate(self, *args):
    args_iter = iter(args)
    env : Dict[str, Node] = {}

    def load_arg(a):
        return torch.fx.graph.map_arg(a, lambda n: env[n.name])

    def fetch_attr(target : str):
        target_atoms = target.split('.')
        attr_itr = self.mod
        for i, atom in enumerate(target_atoms):
            if not hasattr(attr_itr, atom):
                raise RuntimeError(f"Node referenced nonexistent target {'.'.join(target_atoms[:i])}")
            attr_itr = getattr(attr_itr, atom)
        return attr_itr

    for node in self.graph.nodes:
        if node.op == 'placeholder':
            result = next(args_iter)
        elif node.op == 'get_attr':
            result = fetch_attr(node.target)
        elif node.op == 'call_function':
            result = node.target(*load_arg(node.args), **load_arg(node.kwargs))
        elif node.op == 'call_method':
            self_obj, *args = load_arg(node.args)
            kwargs = load_arg(node.kwargs)
            result = getattr(self_obj, node.target)(*args, **kwargs)
        elif node.op == 'call_module':
            result = self.modules[node.target](*load_arg(node.args), **load_arg(node.kwargs))

        # This is the only code specific to shape propagation.
        # You can delete this `if` branch and this becomes
        # a generic GraphModule interpreter.
        if isinstance(result, torch.Tensor):
            node.shape = result.shape
            node.dtype = result.dtype

        env[node.name] = result

    return load_arg(self.graph.result)

As you can see, a full interpreter for FX is not that complicated but it can be very useful. To ease using this pattern, we provide the Interpreter class, which encompasses the above logic in a way that certain aspects of the interpreter’s execution can be overridden via method overrides.
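As a minimal, hedged sketch (the LoggingInterpreter subclass and its print statement are illustrative, not part of the API): we can override run_node to observe each node as it executes, while run() drives execution much like the loop above:

import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

# Illustrative subclass: log each node as the interpreter executes it
class LoggingInterpreter(torch.fx.Interpreter):
    def run_node(self, n):
        result = super().run_node(n)
        print(f"ran {n.op} node {n.name}")
        return result

gm = torch.fx.symbolic_trace(M())
out = LoggingInterpreter(gm).run(torch.randn(3, 4))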

In addition to executing operations, we can also generate a new Graph by feeding Proxy values through an interpreter. Similarly, we provide the Transformer class to encompass this pattern. Transformer behaves similarly to Interpreter, but instead of calling the run method to get a concrete output value from the Module, you would call the Transformer.transform() method to return a new GraphModule which was subject to any transformation rules you installed as overridden methods.
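A minimal sketch of that pattern (the relu-to-sigmoid swap is purely illustrative): override call_function and call Transformer.transform() to obtain the rewritten GraphModule:

import torch
import torch.fx

def f(x):
    return torch.relu(x) + 1.0

# Illustrative Transformer subclass: rewrite torch.relu to torch.sigmoid
class ReluToSigmoid(torch.fx.Transformer):
    def call_function(self, target, args, kwargs):
        if target == torch.relu:
            return super().call_function(torch.sigmoid, args, kwargs)
        return super().call_function(target, args, kwargs)

gm = torch.fx.symbolic_trace(f)
new_gm = ReluToSigmoid(gm).transform()  # a new GraphModule calling sigmoid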

Examples of the Interpreter Pattern

Debugging

Introduction

Often in the course of authoring transformations, our code will not be quite right. In this case, we may need to do some debugging. The key is to work backwards: first, check the results of invoking the generated module to prove or disprove correctness. Then, inspect and debug the generated code. Then, debug the process of transformations that led to the generated code.

If you're not familiar with debuggers, please see the auxiliary section Available Debuggers.

Common Pitfalls in Transform Authoring

Checking Correctness of Modules

Because the output of most deep learning modules consists of floating point torch.Tensor instances, checking for equivalence between the results of two torch.nn.Module instances is not as straightforward as doing a simple equality check. To motivate this, let's use an example:

import torch
import torch.fx
import torchvision.models as models

def transform(m : torch.nn.Module) -> torch.nn.Module:
    gm = torch.fx.symbolic_trace(m)

    # Imagine we're doing some transforms here
    # <...>

    gm.recompile()

    return gm

resnet18 = models.resnet18()
transformed_resnet18 = transform(resnet18)

input_image = torch.randn(5, 3, 224, 224)

assert resnet18(input_image) == transformed_resnet18(input_image)
"""
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
"""

Here, we've tried to check equality of the values of two deep learning models with the == equality operator. However, this is not well-defined, both because that operator returns a tensor and not a bool, and because comparison of floating point values should use a margin of error (or epsilon) to account for the non-associativity of floating point operations (see here for more details). We can use torch.allclose() instead, which will give us an approximate comparison taking into account a relative and absolute tolerance threshold:

assert torch.allclose(resnet18(input_image), transformed_resnet18(input_image))

This is the first tool in our toolbox to check if transformed modules are behaving as we expect compared to a reference implementation.

Debugging the Generated Code

Because FX generates the forward() function on GraphModules, using traditional debugging techniques like print statements or pdb is not as straightforward. Luckily, we have several techniques we can use for debugging the generated code.

Use pdb

Invoke pdb to step into the running program. Although the code that represents the Graph is not in any source file, we can still step into it manually using pdb when the forward pass is invoked.

import torch
import torch.fx as fx
import torchvision.models as models

def my_pass(inp: torch.nn.Module,
            tracer_class : type = fx.Tracer) -> torch.nn.Module:
    graph = tracer_class().trace(inp)
    # Transformation logic here
    # <...>

    # Return new Module
    return fx.GraphModule(inp, graph)

my_module = models.resnet18()
my_module_transformed = my_pass(my_module)

input_value = torch.randn(5, 3, 224, 224)

# When this line is executed at runtime, we will be dropped into an
# interactive `pdb` prompt. We can use the `step` or `s` command to
# step into the execution of the next line
import pdb; pdb.set_trace()

my_module_transformed(input_value)

If you’d like to run the same code multiple times, then it can be a bit tedious to step to the right code with pdb. In that case, one approach is to simply copy-paste the generated forward pass into your code and examine it from there.

# Assume that `traced` is a GraphModule that has undergone some
# number of transforms

# Copy this code for later
print(traced)
# Print the code generated from symbolic tracing. This outputs:
"""
def forward(self, y):
    x = self.x
    add_1 = x + y;  x = y = None
    return add_1
"""

# Subclass the original Module
class SubclassM(M):
    def __init__(self):
        super().__init__()

    # Paste the generated `forward` function (the one we printed and
    # copied above) here
    def forward(self, y):
        x = self.x
        add_1 = x + y;  x = y = None
        return add_1

# Create an instance of the original, untraced Module. Then, create an
# instance of the Module with the copied `forward` function. We can
# now compare the output of both the original and the traced version.
pre_trace = M()
post_trace = SubclassM()

Use the to_folder Function From GraphModule

GraphModule.to_folder() is a method in GraphModule that allows you to dump out the generated FX code to a folder. Although copying the forward pass into the code often suffices as in Print the Generated Code, it may be easier to examine modules and parameters using to_folder.

m = symbolic_trace(M())
m.to_folder("foo", "Bar")
from foo import Bar
y = Bar()

After running the above example, we can then look at the code within foo/module.py and modify it as desired (e.g. adding print statements or using pdb) to debug the generated code.

Debugging the Transformation

Now that we've identified that a transformation is creating incorrect code, it's time to debug the transformation itself. First, we'll check the Limitations of Symbolic Tracing section in the documentation. Once we verify that tracing is working as expected, the goal becomes figuring out what went wrong during our GraphModule transformation. There may be a quick answer in Writing Transformations, but, if not, there are several ways to examine our traced module:

# Sample Module
class M(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# Create an instance of M
m = M()

# Symbolically trace an instance of M (returns a GraphModule). In
# this example, we'll only be discussing how to inspect a
# GraphModule, so we aren't showing any sample transforms for the
# sake of brevity.
traced = symbolic_trace(m)

# Print the code produced by tracing the module.
print(traced)

# The generated `forward` function is:
"""
def forward(self, x, y):
    add = x + y;  x = y = None
    return add
"""

# Print the internal Graph.
print(traced.graph)

# This print-out returns:
"""
graph():
    %x : [#users=1] = placeholder[target=x]
    %y : [#users=1] = placeholder[target=y]
    %add : [#users=1] = call_function[target=operator.add](args = (%x, %y), kwargs = {})
    return add
"""

# Print a tabular representation of the internal Graph.
traced.graph.print_tabular()

# This gives us:
"""
opcode         name    target                   args    kwargs
-------------  ------  -----------------------  ------  --------
placeholder    x       x                        ()      {}
placeholder    y       y                        ()      {}
call_function  add     <built-in function add>  (x, y)  {}
output         output  output                   (add,)  {}
"""

Using the utility functions above, we can compare our traced Module before and after we’ve applied our transformations. Sometimes, a simple visual comparison is enough to trace down a bug. If it’s still not clear what’s going wrong, a debugger like pdb can be a good next step.

Going off of the example above, consider the following code:

# Sample user-defined function
def transform_graph(module: torch.nn.Module,
                    tracer_class : type = fx.Tracer) -> torch.nn.Module:
    # Get the Graph from our traced Module
    g = tracer_class().trace(module)

    """
    Transformations on `g` go here
    """

    return fx.GraphModule(module, g)

# Transform the Graph
transformed = transform_graph(traced)

# Print the new code after our transforms. Check to see if it was
# what we expected
print(transformed)

Using the above example, let's say that the call to print(traced) showed us that there was an error in our transforms. We want to find what goes wrong using a debugger. We start a pdb session. We can see what's happening during the transform by breaking on transform_graph(traced), then pressing s to "step into" the call to transform_graph(traced).

We may also have good luck by editing the print_tabular method to print different attributes of the Nodes in the Graph. (For example, we might want to see the Node's all_input_nodes and users, as sketched below.)
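Rather than editing print_tabular itself, a simple loop over the nodes can print whichever attributes we care about. A short sketch (traced is the GraphModule from above):

for node in traced.graph.nodes:
    # `all_input_nodes` lists the Nodes this node consumes;
    # `users` records the Nodes that consume this node's value.
    print(node.name, node.op,
          [n.name for n in node.all_input_nodes],
          [n.name for n in node.users])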

Available Debuggers

The most common Python debugger is pdb. You can start your program in "debug mode" with pdb by typing python -m pdb FILENAME.py into the command line, where FILENAME is the name of the file you want to debug. After that, you can use the pdb debugger commands to move through your running program stepwise. It's common to set a breakpoint (b LINE-NUMBER) when you start pdb, then call c to run the program until that point. This prevents you from having to step through each line of execution (using s or n) to get to the part of the code you want to examine. Alternatively, you can write import pdb; pdb.set_trace() before the line you want to break at. If you add pdb.set_trace(), your program will automatically start in debug mode when you run it. (In other words, you can just type python FILENAME.py into the command line instead of python -m pdb FILENAME.py.) Once you're running your file in debug mode, you can step through the code and examine your program's internal state using certain commands. There are many excellent tutorials on pdb online, including RealPython's "Python Debugging With Pdb".

IDEs like PyCharm or VSCode usually have a debugger built in. In your IDE, you can choose to either a) use pdb by pulling up a terminal window in your IDE (e.g. View → Terminal in VSCode), or b) use the built-in debugger (usually a graphical wrapper around pdb).

Limitations of Symbolic Tracing

FX uses a system of symbolic tracing (a.k.a. symbolic execution) to capture the semantics of programs in a transformable/analyzable form. The system is tracing in that it executes the program (really a torch.nn.Module or function) to record operations. It is symbolic in that the data flowing through the program during this execution is not real data, but rather symbols (Proxy in FX parlance).

Although symbolic tracing works for most neural net code, it has some limitations.

Dynamic Control Flow

The main limitation of symbolic tracing is that it does not currently support dynamic control flow. That is, loops or if statements where the condition may depend on the input values of the program.

For example, let’s examine the following program:

def func_to_trace(x):
    if x.sum() > 0:
        return torch.relu(x)
    else:
        return torch.neg(x)

traced = torch.fx.symbolic_trace(func_to_trace)
"""
  <...>
  File "dyn.py", line 6, in func_to_trace
    if x.sum() > 0:
  File "pytorch/torch/fx/proxy.py", line 155, in __bool__
    return self.tracer.to_bool(self)
  File "pytorch/torch/fx/proxy.py", line 85, in to_bool
    raise TraceError('symbolically traced variables cannot be used as inputs to control flow')
torch.fx.proxy.TraceError: symbolically traced variables cannot be used as inputs to control flow
"""

The condition to the if statement relies on the value of x.sum(), which relies on the value of x, a function input. Since x can change (i.e. if you pass a new input tensor to the traced function), this is dynamic control flow. The traceback walks back up through your code to show you where this situation happens.

Static Control Flow

On the other hand, so-called static control flow is supported. Static control flow is loops or if statements whose value cannot change across invocations. Typically, in PyTorch programs, this control flow arises for code making decisions about a model’s architecture based on hyper-parameters. As a concrete example:

import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self, do_activation : bool = False):
        super().__init__()
        self.do_activation = do_activation
        self.linear = torch.nn.Linear(512, 512)

    def forward(self, x):
        x = self.linear(x)
        # This if-statement is so-called static control flow.
        # Its condition does not depend on any input values
        if self.do_activation:
            x = torch.relu(x)
        return x

without_activation = MyModule(do_activation=False)
with_activation = MyModule(do_activation=True)

traced_without_activation = torch.fx.symbolic_trace(without_activation)
print(traced_without_activation.code)
"""
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    return linear_1
"""

traced_with_activation = torch.fx.symbolic_trace(with_activation)
print(traced_with_activation.code)
"""
import torch
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    relu_1 = torch.relu(linear_1);  linear_1 = None
    return relu_1
"""

The if-statement if self.do_activation does not depend on any function inputs, thus it is static. do_activation can be considered to be a hyper-parameter, and the traces of different instances of MyModule with different values for that parameter have different code. This is a valid pattern that is supported by symbolic tracing.

Many instances of dynamic control flow are semantically static control flow. These instances can be made to support symbolic tracing by removing the data dependencies on input values, for example by moving values to Module attributes or by binding concrete values to arguments during symbolic tracing:

def f(x, flag):
    if flag: return x
    else: return x*2

fx.symbolic_trace(f) # Fails!

fx.symbolic_trace(f, concrete_args={'flag': True})

In the case of truly dynamic control flow, the sections of the program that contain this code can be traced as calls to the Method (see Customizing Tracing with the Tracer class) or function (see wrap()) rather than tracing through them.

Non-torch Functions

FX uses __torch_function__ as the mechanism by which it intercepts calls (see the technical overview for more information about this). Some functions, such as builtin Python functions or those in the math module, are not covered by __torch_function__, but we would still like to capture them in symbolic tracing. For example:

import torch
import torch.fx
from math import sqrt

def normalize(x):
    """
    Normalize x by the size of the batch dimension
    """
    return x / sqrt(len(x))

# It's valid Python code
normalize(torch.rand(3, 4))

traced = torch.fx.symbolic_trace(normalize)
"""
  <...>
  File "sqrt.py", line 9, in normalize
    return x / sqrt(len(x))
  File "pytorch/torch/fx/proxy.py", line 161, in __len__
    raise RuntimeError("'len' is not supported in symbolic tracing by default. If you want "
RuntimeError: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope
"""

The error tells us that the built-in function len is not supported. We can make it so that functions like this are recorded in the trace as direct calls using the wrap() API:

torch.fx.wrap('len')
torch.fx.wrap('sqrt')

traced = torch.fx.symbolic_trace(normalize)

print(traced.code)
"""
import math
def forward(self, x):
    len_1 = len(x)
    sqrt_1 = math.sqrt(len_1);  len_1 = None
    truediv = x / sqrt_1;  x = sqrt_1 = None
    return truediv
"""

Customizing Tracing with the Tracer class

The Tracer class is the class that underlies the implementation of symbolic_trace. The behavior of tracing can be customized by subclassing Tracer, like so:

class MyCustomTracer(torch.fx.Tracer):
    # Inside here you can override various methods
    # to customize tracing. See the Tracer API
    # reference
    pass

# Let's use this custom tracer to trace through this module
class MyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + torch.ones(3, 4)

mod = MyModule()

traced_graph = MyCustomTracer().trace(mod)
# trace() returns a Graph. Let's wrap it up in a
# GraphModule to make it runnable
traced = torch.fx.GraphModule(mod, traced_graph)

Leaf Modules

Leaf Modules are the modules that appear as calls in the symbolic trace rather than being traced through. The default set of leaf modules is the set of standard torch.nn module instances. For example:

class MySpecialSubmodule(torch.nn.Module):
    def forward(self, x):
        return torch.neg(x)

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(3, 4)
        self.submod = MySpecialSubmodule()

    def forward(self, x):
        return self.submod(self.linear(x))

traced = torch.fx.symbolic_trace(MyModule())
print(traced.code)
# `linear` is preserved as a call, yet `submod` is traced through.
# This is because the default set of "Leaf Modules" includes all
# standard torch.nn modules.
"""
import torch
def forward(self, x):
    linear_1 = self.linear(x);  x = None
    neg_1 = torch.neg(linear_1);  linear_1 = None
    return neg_1
"""

The set of leaf modules can be customized by overriding Tracer.is_leaf_module().
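For example, a minimal sketch continuing with the classes above (the LeafTracer name is illustrative): a Tracer subclass can declare MySpecialSubmodule a leaf so it is preserved as a call_module node rather than traced through:

class LeafTracer(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Treat our submodule as atomic; otherwise defer to the
        # default behavior (standard torch.nn modules are leaves).
        if isinstance(m, MySpecialSubmodule):
            return True
        return super().is_leaf_module(m, module_qualified_name)

mod = MyModule()
graph = LeafTracer().trace(mod)
traced = torch.fx.GraphModule(mod, graph)
# traced.code now calls self.submod(...) instead of torch.neg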

Miscellanea

API Reference

torch.fx.symbolic_trace(root, concrete_args=None)[source]

Symbolic tracing API

Given an nn.Module or function instance root, this function will return a GraphModule constructed by recording operations seen while tracing through root.

concrete_args allows you to partially specialize your function, whether it’s to remove control flow or data structures.

For example:

def f(a, b):
    if b == True:
        return a
    else:
        return a*2

FX can typically not trace through this due to the presence of control flow. However, we can use concrete_args to specialize on the value of b to trace through this:

f = fx.symbolic_trace(f, concrete_args={'b': False})
assert f(3, False) == 6

Note that although you can still pass in different values of b, they will be ignored.

We can also use concrete_args to eliminate data-structure handling from our function. This will use pytrees to flatten your input. To avoid overspecializing, pass in fx.PH for values that shouldn’t be specialized. For example:

def f(x):
    out = 0
    for v in x.values():
        out += v
    return out
f = fx.symbolic_trace(f, concrete_args={'x': {'a': fx.PH, 'b': fx.PH, 'c': fx.PH}})
assert f({'a': 1, 'b': 2, 'c': 4}) == 7

Parameters:

Returns:

a Module created from the recorded operations from root.

Return type:

GraphModule

Note

Backwards-compatibility for this API is guaranteed.

torch.fx.wrap(fn_or_name)[source]

This function can be called at module-level scope to register fn_or_name as a “leaf function”. A “leaf function” will be preserved as a CallFunction node in the FX trace instead of being traced through:

# foo/bar/baz.py
def my_custom_function(x, y):
    return x * x + y * y

torch.fx.wrap('my_custom_function')

def fn_to_be_traced(x, y):
    # When symbolic tracing, the below call to my_custom_function will be inserted into
    # the graph rather than tracing it.
    return my_custom_function(x, y)

This function can also equivalently be used as a decorator:

# foo/bar/baz.py
@torch.fx.wrap
def my_custom_function(x, y):
    return x * x + y * y

A wrapped function can be thought of as a "leaf function", analogous to the concept of "leaf modules", that is, they are functions that are left as calls in the FX trace rather than traced through.

Parameters:

fn_or_name (Union[str, Callable]) – The function or name of the global function to insert into the graph when it's called

Note

Backwards-compatibility for this API is guaranteed.

class torch.fx.GraphModule(*args, **kwargs)[source]

GraphModule is an nn.Module generated from an fx.Graph. GraphModule has a graph attribute, as well as code and forward attributes generated from that graph.

Warning

When graph is reassigned, code and forward will be automatically regenerated. However, if you edit the contents of the graph without reassigning the graph attribute itself, you must call recompile() to update the generated code.

Note

Backwards-compatibility for this API is guaranteed.

__init__(root, graph, class_name='GraphModule')[source]

Construct a GraphModule.

Parameters:

Note

Backwards-compatibility for this API is guaranteed.

add_submodule(target, m)[source]

Adds the given submodule to self.

This installs empty Modules where none exist yet if they are subpaths of target.

Parameters:

Returns:

Whether or not the submodule could be inserted. For this method to return True, each object in the chain denoted by target must either a) not exist yet, or b) reference an nn.Module (not a parameter or other attribute)

Return type:

bool

Note

Backwards-compatibility for this API is guaranteed.

property code: str

Return the Python code generated from the Graph underlying this GraphModule.

delete_all_unused_submodules()[source]

Deletes all unused submodules from self.

A Module is considered "used" if any one of the following is true:

  1. It has children that are used
  2. Its forward is called directly via a call_module node
  3. It has a non-Module attribute that is used from a get_attr node

This method can be called to clean up an nn.Module without manually calling delete_submodule on each unused submodule.

Note

Backwards-compatibility for this API is guaranteed.

delete_submodule(target)[source]

Deletes the given submodule from self.

The module will not be deleted if target is not a valid target.

Parameters:

target (str) – The fully-qualified string name of the new submodule (See example in nn.Module.get_submodule for how to specify a fully-qualified string.)

Returns:

Whether or not the target string referenced a submodule we want to delete. A return value of False means that the target was not a valid reference to a submodule.

Return type:

bool

Note

Backwards-compatibility for this API is guaranteed.

property graph: Graph

Return the Graph underlying this GraphModule

print_readable(print_output=True)[source]

Return the Python code generated for the current GraphModule and its children GraphModules

Warning

This API is experimental and is NOT backward-compatible.

recompile()[source]

Recompile this GraphModule from its graph attribute. This should be called after editing the contained graph, otherwise the generated code of this GraphModule will be out of date.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

PythonCode

to_folder(folder, module_name='FxModule')[source]

Dumps out the module to folder with module_name so that it can be imported with from <folder> import <module_name>

Args:

folder (Union[str, os.PathLike]): The folder to write the code out to

module_name (str): Top-level name to use for the Module while writing out the code

Warning

This API is experimental and is NOT backward-compatible.

class torch.fx.Graph(owning_module=None, tracer_cls=None, tracer_extras=None)[source]

Graph is the main data structure used in the FX Intermediate Representation. It consists of a series of Node s, each representing callsites (or other syntactic constructs). The list of Node s, taken together, constitute a valid Python function.

For example, the following code

import torch
import torch.fx

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(3, 4))
        self.linear = torch.nn.Linear(4, 5)

    def forward(self, x):
        return torch.topk(torch.sum(self.linear(x + self.linear.weight).relu(), dim=-1), 3)

m = MyModule()
gm = torch.fx.symbolic_trace(m)

Will produce the following Graph:

graph(x):
    %linear_weight : [#users=1] = self.linear.weight
    %add_1 : [#users=1] = call_function[target=operator.add](args = (%x, %linear_weight), kwargs = {})
    %linear_1 : [#users=1] = call_module[target=linear](args = (%add_1,), kwargs = {})
    %relu_1 : [#users=1] = call_method[target=relu](args = (%linear_1,), kwargs = {})
    %sum_1 : [#users=1] = call_function[target=torch.sum](args = (%relu_1,), kwargs = {dim: -1})
    %topk_1 : [#users=1] = call_function[target=torch.topk](args = (%sum_1, 3), kwargs = {})
    return topk_1

For the semantics of operations represented in the Graph, please see Node.

Note

Backwards-compatibility for this API is guaranteed.

__init__(owning_module=None, tracer_cls=None, tracer_extras=None)[source]

Construct an empty Graph.

Note

Backwards-compatibility for this API is guaranteed.

call_function(the_function, args=None, kwargs=None, type_expr=None)[source]

Insert a call_function Node into the Graph. A call_function node represents a call to a Python callable, specified by the_function.

Parameters:

Returns:

The newly created and inserted call_function node.

Return type:

Node

Note

The same insertion point and type expression rules apply for this method as Graph.create_node().

Note

Backwards-compatibility for this API is guaranteed.

call_method(method_name, args=None, kwargs=None, type_expr=None)[source]

Insert a call_method Node into the Graph. A call_method node represents a call to a given method on the 0th element of args.

Parameters:

Returns:

The newly created and inserted call_method node.

Return type:

Node

Note

The same insertion point and type expression rules apply for this method as Graph.create_node().

Note

Backwards-compatibility for this API is guaranteed.

call_module(module_name, args=None, kwargs=None, type_expr=None)[source]

Insert a call_module Node into the Graph. A call_module node represents a call to the forward() function of a Module in the Module hierarchy.

Parameters:

Returns:

The newly-created and inserted call_module node.

Return type:

Node

Note

The same insertion point and type expression rules apply for this method as Graph.create_node().

Note

Backwards-compatibility for this API is guaranteed.

create_node(op, target, args=None, kwargs=None, name=None, type_expr=None)[source]

Create a Node and add it to the Graph at the current insert-point. Note that the current insert-point can be set via Graph.inserting_before() and Graph.inserting_after().

Parameters:

Returns:

The newly-created and inserted node.

Return type:

Node

Note

Backwards-compatibility for this API is guaranteed.

eliminate_dead_code()[source]

Remove all dead code from the graph, based on each node’s number of users, and whether the nodes have any side effects. The graph must be topologically sorted before calling.

Returns:

Whether the graph was changed as a result of the pass.

Return type:

bool

Example:

Before dead code is eliminated, a from a = x + 1 below has no users and thus can be eliminated from the graph without having an effect.

def forward(self, x):
    a = x + 1
    return x + self.attr_1

After dead code is eliminated, a = x + 1 has been removed, and the rest of forward remains.

def forward(self, x):
    return x + self.attr_1

Warning

Dead code elimination has some heuristics to avoid removing side-effectful nodes (see Node.is_impure) but in general coverage is very bad, so you should assume that this method is not sound to call unless you know that your FX graph consists entirely of functional operations.

Note

Backwards-compatibility for this API is guaranteed.

erase_node(to_erase)[source]

Erases a Node from the Graph. Throws an exception if there are still users of that node in the Graph.

Parameters:

to_erase (Node) – The Node to erase from the Graph.

Note

Backwards-compatibility for this API is guaranteed.

get_attr(qualified_name, type_expr=None)[source]

Insert a get_attr node into the Graph. A get_attr Node represents the fetch of an attribute from the Module hierarchy.

Parameters:

Returns:

The newly-created and inserted get_attr node.

Return type:

Node

Note

The same insertion point and type expression rules apply for this method as Graph.create_node.

Note

Backwards-compatibility for this API is guaranteed.

graph_copy(g, val_map, return_output_node=False)[source]

Copy all nodes from a given graph into self.

Parameters:

Returns:

The value in self that is now equivalent to the output value in g, if g had an output node. None otherwise.

Return type:

Optional[Union[Tuple[Any, …], List[Any], Dict[str, Any], slice, range, Node, str, int, float, bool, complex, dtype, Tensor, device, memory_format, layout]]

Note

Backwards-compatibility for this API is guaranteed.

inserting_after(n=None)[source]

Set the point at which create_node and companion methods will insert into the graph.

When used within a 'with' statement, this will temporarily set the insert point and then restore it when the with statement exits:

with g.inserting_after(n):
    ... # inserting after node n
... # insert point restored to what it was previously
g.inserting_after(n) # set the insert point permanently

Args:

n (Optional[Node]): The node before which to insert. If None this will insert after the beginning of the entire graph.

Returns:

A resource manager that will restore the insert point on __exit__.

Note

Backwards-compatibility for this API is guaranteed.

inserting_before(n=None)[source]

Set the point at which create_node and companion methods will insert into the graph.

When used within a 'with' statement, this will temporarily set the insert point and then restore it when the with statement exits:

with g.inserting_before(n):
    ... # inserting before node n
... # insert point restored to what it was previously
g.inserting_before(n) # set the insert point permanently

Args:

n (Optional[Node]): The node before which to insert. If None this will insert before the beginning of the entire graph.

Returns:

A resource manager that will restore the insert point on __exit__.

Note

Backwards-compatibility for this API is guaranteed.

lint()[source]

Runs various checks on this Graph to make sure it is well-formed. In particular:

- Checks Nodes have correct ownership (owned by this graph)
- Checks Nodes appear in topological order
- If this Graph has an owning GraphModule, checks that targets exist in that GraphModule

Note

Backwards-compatibility for this API is guaranteed.

node_copy(node, arg_transform=<function Graph.<lambda>>)[source]

Copy a node from one graph into another. arg_transform needs to transform arguments from the graph of node to the graph of self. Example:

# Copying all the nodes in `g` into `new_graph`
g : torch.fx.Graph = ...
new_graph = torch.fx.Graph()
value_remap = {}
for node in g.nodes:
    value_remap[node] = new_graph.node_copy(node, lambda n : value_remap[n])

Parameters:

Return type:

Node

Note

Backwards-compatibility for this API is guaranteed.

property nodes: _node_list

Get the list of Nodes that constitute this Graph.

Note that this Node list representation is a doubly-linked list. Mutations during iteration (e.g. delete a Node, add a Node) are safe.

Returns:

A doubly-linked list of Nodes. Note that reversed can be called on this list to switch iteration order.
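Because the list is doubly-linked, it is safe to erase or add nodes while iterating. A short sketch (gm is an assumed GraphModule):

# Erase unused call_function nodes while iterating over the list
for node in gm.graph.nodes:
    if node.op == 'call_function' and len(node.users) == 0:
        gm.graph.erase_node(node)
gm.recompile()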

on_generate_code(make_transformer)[source]

Register a transformer function when python code is generated

Args:

make_transformer (Callable[[Optional[TransformCodeFunc]], TransformCodeFunc]):

a function that returns a code transformer to be registered. This function is called by on_generate_code to obtain the code transformer.

This function is also given as its input the currently registered code transformer (or None if nothing is registered), in case it is not desirable to overwrite it. This is useful to chain code transformers together.

Returns:

a context manager that, when used in a with statement, automatically restores the previously registered code transformer.

Example:

gm: fx.GraphModule = ...

# This is a code transformer we want to register. This code
# transformer prepends a pdb import and trace statement at the very
# beginning of the generated torch.fx code to allow for manual
# debugging with the PDB library.
def insert_pdb(body):
    return ["import pdb; pdb.set_trace()\n", *body]

# Registers `insert_pdb`, and overwrites the current registered
# code transformer (given by `_` to the lambda):
gm.graph.on_generate_code(
    lambda _: insert_pdb
)

# Or alternatively, registers a code transformer which first
# runs `body` through existing registered transformer, then
# through `insert_pdb`:
gm.graph.on_generate_code(
    lambda current_trans: (
        lambda body: insert_pdb(
            current_trans(body) if current_trans
            else body
        )
    )
)

gm.recompile()
gm(*inputs)  # drops into pdb

This function can also be used as a context manager, with the benefit that it automatically restores the previously registered code transformer:

# ... continue from previous example

with gm.graph.on_generate_code(lambda _: insert_pdb):
    # do more stuff with `gm`...
    gm.recompile()
    gm(*inputs)  # drops into pdb

# now previous code transformer is restored (but `gm`'s code with pdb
# remains - that means you can run `gm` with pdb here too, until you
# run the next `recompile()`).

Warning

This API is experimental and is NOT backward-compatible.

output(result, type_expr=None)[source]

Insert an output Node into the Graph. An output node represents a return statement in Python code. result is the value that should be returned.

Parameters:

Note

The same insertion point and type expression rules apply for this method as Graph.create_node.

Note

Backwards-compatibility for this API is guaranteed.

placeholder(name, type_expr=None, default_value)[source]

Insert a placeholder node into the Graph. A placeholder represents a function input.

Parameters:

Return type:

Node

Note

The same insertion point and type expression rules apply for this method as Graph.create_node.

Note

Backwards-compatibility for this API is guaranteed.

print_tabular()[source]

Prints the intermediate representation of the graph in tabular format. Note that this API requires the tabulate module to be installed.

Note

Backwards-compatibility for this API is guaranteed.

process_inputs(*args)[source]

Processes args so that they can be passed to the FX graph.

Warning

This API is experimental and is NOT backward-compatible.

process_outputs(out)[source]

Warning

This API is experimental and is NOT backward-compatible.

python_code(root_module, *, verbose=False)[source]

Turn this Graph into valid Python code.

Parameters:

root_module (str) – The name of the root module on which to look-up qualified name targets. This is usually ‘self’.

Returns:

src: the Python source code representing the object

globals: a dictionary of global names in src -> the objects that they reference.

Return type:

A PythonCode object, consisting of two fields

Note

Backwards-compatibility for this API is guaranteed.

set_codegen(codegen)[source]

Warning

This API is experimental and is NOT backward-compatible.

class torch.fx.Node(graph, name, op, target, args, kwargs, return_type=None)[source]

Node is the data structure that represents individual operations within a Graph. For the most part, Nodes represent callsites to various entities, such as operators, methods, and Modules (some exceptions include nodes that specify function inputs and outputs). Each Node has a function specified by its op property. The Node semantics for each value of op are as follows:

- placeholder represents a function input.
- get_attr retrieves a parameter from the module hierarchy.
- call_function applies a free function to some values.
- call_module applies a module in the module hierarchy's forward() method to given arguments.
- call_method calls a method on a value.
- output contains the output of the traced function in its args[0] attribute.

Note

Backwards-compatibility for this API is guaranteed.

property all_input_nodes: List[Node]

Return all Nodes that are inputs to this Node. This is equivalent to iterating over args and kwargs and only collecting the values that are Nodes.

Returns:

List of Nodes that appear in the args and kwargs of this Node, in that order.

append(x)[source]

Insert x after this node in the list of nodes in the graph. Equivalent to self.next.prepend(x)

Parameters:

x (Node) – The node to put after this node. Must be a member of the same graph.

Note

Backwards-compatibility for this API is guaranteed.

property args: Tuple[Optional[Union[Tuple[Any, ...], List[Any], Dict[str, Any], slice, range, Node, str, int, float, bool, complex, dtype, Tensor, device, memory_format, layout]], ...]

The tuple of arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information.

Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.

format_node(placeholder_names=None, maybe_return_typename=None)[source]

Return a descriptive string representation of self.

This method can be used with no arguments as a debugging utility.

This function is also used internally in the __str__ method of Graph. Together, the strings in placeholder_names and maybe_return_typename make up the signature of the autogenerated forward function in this Graph's surrounding GraphModule. placeholder_names and maybe_return_typename should not be used otherwise.

Parameters:

Returns:

If 1) we're using format_node as an internal helper in the __str__ method of Graph, and 2) self is a placeholder Node, return None. Otherwise, return a descriptive string representation of the current Node.

Return type:

str

Note

Backwards-compatibility for this API is guaranteed.

is_impure()[source]

Returns whether this op is impure, i.e. whether its op is placeholder or output, or whether it is a call_function or call_module that is impure.

Returns:

If the op is impure or not.

Return type:

bool

Warning

This API is experimental and is NOT backward-compatible.

property kwargs: Dict[str, Optional[Union[Tuple[Any, ...], List[Any], Dict[str, Any], slice, range, Node, str, int, float, bool, complex, dtype, Tensor, device, memory_format, layout]]]

The dict of keyword arguments to this Node. The interpretation of arguments depends on the node’s opcode. See the Node docstring for more information.

Assignment to this property is allowed. All accounting of uses and users is updated automatically on assignment.

property next: Node

Returns the next Node in the linked list of Nodes.

Returns:

The next Node in the linked list of Nodes.

normalized_arguments(root, arg_types=None, kwarg_types=None, normalize_to_only_use_kwargs=False)[source]

Returns normalized arguments to Python targets. This means that args/kwargs will be matched up to the module/functional's signature and return exclusively kwargs in positional order if normalize_to_only_use_kwargs is true. Also populates default values. Does not support positional-only parameters or varargs parameters.

Supports module calls.

May require arg_types and kwarg_types in order to disambiguate overloads.

Parameters:

Returns:

Returns NamedTuple ArgsKwargsPair, or None if not successful.

Return type:

Optional[_ArgsKwargsPair_]

Warning

This API is experimental and is NOT backward-compatible.

prepend(x)[source]

Insert x before this node in the list of nodes in the graph. Example:

Before: p -> self
        bx -> x -> ax
After:  p -> x -> self
        bx -> ax

Parameters:

x (Node) – The node to put before this node. Must be a member of the same graph.

Note

Backwards-compatibility for this API is guaranteed.

property prev: Node

Returns the previous Node in the linked list of Nodes.

Returns:

The previous Node in the linked list of Nodes.

replace_all_uses_with(replace_with, delete_user_cb=<function Node.<lambda>>, *, propagate_meta=False)[source]

Replace all uses of self in the Graph with the Node replace_with.

Parameters:

Returns:

The list of Nodes on which this change was made.

Return type:

List[Node]

Note

Backwards-compatibility for this API is guaranteed.

replace_input_with(old_input, new_input)[source]

Loop through input nodes of self, and replace all instances of old_input with new_input.

Parameters:

Note

Backwards-compatibility for this API is guaranteed.

property stack_trace: Optional[str]

Return the Python stack trace that was recorded during tracing, if any. This property is usually populated by Tracer.create_proxy. To record stack traces during tracing for debug purposes, set record_stack_traces = True on the Tracer instance.
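A brief sketch of that workflow (my_module is an assumed nn.Module):

tracer = torch.fx.Tracer()
tracer.record_stack_traces = True  # record where each op came from
graph = tracer.trace(my_module)
for node in graph.nodes:
    print(node.name, node.stack_trace)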

update_arg(idx, arg)[source]

Update an existing positional argument to contain the new value arg. After calling, self.args[idx] == arg.

Parameters:

Note

Backwards-compatibility for this API is guaranteed.

update_kwarg(key, arg)[source]

Update an existing keyword argument to contain the new value arg. After calling, self.kwargs[key] == arg.

Parameters:

Note

Backwards-compatibility for this API is guaranteed.

class torch.fx.Tracer(autowrap_modules=(math,), autowrap_functions=())[source]

Tracer is the class that implements the symbolic tracing functionality of torch.fx.symbolic_trace. A call to symbolic_trace(m) is equivalent to Tracer().trace(m).

Tracer can be subclassed to override various behaviors of the tracing process. The different behaviors that can be overridden are described in the docstrings of the methods on this class.

Note

Backwards-compatibility for this API is guaranteed.

call_module(m, forward, args, kwargs)[source]

Method that specifies the behavior of this Tracer when it encounters a call to an nn.Module instance.

By default, the behavior is to check if the called module is a leaf module via is_leaf_module. If it is, emit a call_module node referring to m in the Graph. Otherwise, call the Module normally, tracing through the operations in its forward function.

This method can be overridden to, for example, create nested traced GraphModules, or any other behavior you would want while tracing across Module boundaries.

Parameters:

Returns:

The return value from the Module call. In the case that a call_module node was emitted, this is a Proxy value. Otherwise, it is whatever value was returned from the Module invocation.

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

create_arg(a)[source]

A method to specify the behavior of tracing when preparing values to be used as arguments to nodes in the Graph.

By default, the behavior includes:

  1. Iterate through collection types (e.g. tuple, list, dict) and recursively call create_args on the elements.
  2. Given a Proxy object, return a reference to the underlying IR Node
  3. Given a non-Proxy Tensor object, emit IR for various cases:
    • For a Parameter, emit a get_attr node referring to that Parameter
    • For a non-Parameter Tensor, store the Tensor away in a special attribute referring to that attribute.

This method can be overridden to support more types.

Parameters:

a (Any) – The value to be emitted as an Argument in the Graph.

Returns:

The value a converted into the appropriate Argument

Return type:

Optional[Union[Tuple[Any, …], List[Any], Dict[str, Any], slice, range, Node, str, int, float, bool, complex, dtype, Tensor, device, memory_format, layout]]

Note

Backwards-compatibility for this API is guaranteed.

create_args_for_root(root_fn, is_module, concrete_args=None)[source]

Create placeholder nodes corresponding to the signature of the root Module. This method introspects root's signature and emits those nodes accordingly, also supporting *args and **kwargs.

Warning

This API is experimental and is NOT backward-compatible.

create_node(kind, target, args, kwargs, name=None, type_expr=None)

Inserts a graph node given target, args, kwargs, and name.

This method can be overridden to do extra checking, validation, or modification of values used in node creation. For example, one might want to disallow in-place operations from being recorded.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Node

create_proxy(kind, target, args, kwargs, name=None, type_expr=None, proxy_factory_fn=None)

Create a Node from the given arguments, then return the Node wrapped in a Proxy object.

If kind = ‘placeholder’, then we’re creating a Node that represents the parameter of a function. If we need to encode a default parameter, we use the args tuple. args is otherwise empty for placeholder Nodes.

Note

Backwards-compatibility for this API is guaranteed.

getattr(attr, attr_val, parameter_proxy_cache)[source]

Method that specifies the behavior of this Tracer when getattr is called on an nn.Module instance during tracing.

By default, the behavior is to return a proxy value for the attribute. It also stores the proxy value in the parameter_proxy_cache, so that future calls will reuse the proxy rather than creating a new one.

This method can be overridden to, for example, not return proxies when querying parameters.

Parameters:

Returns:

The return value from the getattr call.

Warning

This API is experimental and is NOT backward-compatible.

is_leaf_module(m, module_qualified_name)[source]

A method to specify whether a given nn.Module is a “leaf” module.

Leaf modules are the atomic units that appear in the IR, referenced by call_module calls. By default, Modules in the PyTorch standard library namespace (torch.nn) are leaf modules. All other modules are traced through and their constituent ops are recorded, unless specified otherwise via this parameter.

Parameters:

Return type:

bool

Note

Backwards-compatibility for this API is guaranteed.
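
A minimal sketch of forcing a custom module to be kept as an opaque call_module node (MyCustomOp is illustrative):

import torch
import torch.fx

class MyCustomOp(torch.nn.Module):  # illustrative custom module
    def forward(self, x):
        return x * 2

class KeepCustomOpTracer(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Keep our custom module atomic; defer to the default rule otherwise
        if isinstance(m, MyCustomOp):
            return True
        return super().is_leaf_module(m, module_qualified_name)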

iter(obj)

Called when a proxy object is being iterated over, such as when used in control flow. Normally we don’t know what to do because we don’t know the value of the proxy, but a custom tracer can attach more information to the graph node using create_node and can choose to return an iterator.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Iterator

keys(obj)

Called when a proxy object has the keys() method called on it.

This is what happens when ** is called on a proxy. This should return an iterator if ** is supposed to work in your custom tracer.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Any

path_of_module(mod)[source]

Helper method to find the qualified name of mod in the Module hierarchy of root. For example, if root has a submodule named foo, which has a submodule named bar, passing bar into this function will return the string “foo.bar”.

Parameters:

mod (torch.nn.Module) – The Module to retrieve the qualified name for.

Return type:

str

Note

Backwards-compatibility for this API is guaranteed.

proxy(node)

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Proxy

to_bool(obj)

Called when a proxy object is being converted to a boolean, such as when used in control flow. Normally we don’t know what to do because we don’t know the value of the proxy, but a custom tracer can attach more information to the graph node using create_node and can choose to return a value.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

bool
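
A minimal sketch of an override that simply assumes every proxied condition is True so tracing can proceed down one branch (note this changes program semantics, so it is only safe when the True branch is the one you intend to capture):

import torch.fx

class AssumeTrueTracer(torch.fx.Tracer):
    def to_bool(self, obj):
        # Pretend every data-dependent condition is True during tracing
        return True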

trace(root, concrete_args=None)[source]

Trace root and return the corresponding FX Graph representation. root can either be an nn.Module instance or a Python callable.

Note that after this call, self.root may be different from the root passed in here. For example, when a free function is passed to trace(), we will create an nn.Module instance to use as the root and add embedded constants to it.

Parameters:

Returns:

A Graph representing the semantics of the passed-in root.

Return type:

Graph

Note

Backwards-compatibility for this API is guaranteed.
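
A minimal sketch of tracing a free function directly:

import torch
import torch.fx

def free_fn(x):
    return x.relu() + 1

graph = torch.fx.Tracer().trace(free_fn)
# placeholder -> call_method (relu) -> call_function (add) -> output
print(graph)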

class torch.fx.Proxy(node, tracer=None)[source]

Proxy objects are Node wrappers that flow through the program during symbolic tracing and record all the operations (torch function calls, method calls, operators) that they touch into the growing FX Graph.

If you’re doing graph transforms, you can wrap your own Proxy method around a raw Node so that you can use the overloaded operators to add additional things to a Graph.

Proxy objects cannot be iterated. In other words, the symbolic tracer will throw an error if a Proxy is used in a loop or as an *args/**kwargs function argument.

There are two main ways around this:

  1. Factor out the untraceable logic into a top-level function and use fx.wrap on it.
  2. If the control flow is static (i.e. the loop trip count is based on some hyperparameter), the code can be kept in its original position and refactored into something like:

for i in range(self.some_hyperparameter):
    indexed_item = proxied_value[i]

For a more detailed description of the Proxy internals, check out the “Proxy” section in torch/fx/OVERVIEW.md

Note

Backwards-compatibility for this API is guaranteed.
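
A minimal sketch of wrapping a raw Node in a Proxy to splice new operations into an existing graph (the function and variable names are illustrative):

import torch
import torch.fx

def f(x):
    return x + 1

gm = torch.fx.symbolic_trace(f)
output_node = next(n for n in gm.graph.nodes if n.op == 'output')
with gm.graph.inserting_before(output_node):
    # Wrap the value flowing into the output; operator overloads and torch
    # calls on the Proxy now record new nodes at the insertion point
    p = torch.fx.Proxy(output_node.args[0])
    new_out = torch.relu(p).node
output_node.args = (new_out,)
gm.recompile()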

class torch.fx.Interpreter(module, garbage_collect_values=True)[source]

An Interpreter executes an FX graph Node-by-Node. This pattern can be useful for many things, including writing code transformations as well as analysis passes.

Methods in the Interpreter class can be overridden to customize the behavior of execution. The map of overrideable methods in terms of call hierarchy:

run()
    +-- run_node
        +-- placeholder()
        +-- get_attr()
        +-- call_function()
        +-- call_method()
        +-- call_module()
        +-- output()

Example

Suppose we want to swap all instances of torch.neg with torch.sigmoid and vice versa (including their Tensor method equivalents). We could subclass Interpreter like so:

import torch
import torch.fx
from torch.fx import Interpreter
from torch.fx.node import Target
from typing import Any, Dict, Tuple

class NegSigmSwapInterpreter(Interpreter):
    def call_function(self, target: Target, args: Tuple, kwargs: Dict) -> Any:
        if target == torch.sigmoid:
            return torch.neg(*args, **kwargs)
        return super().call_function(target, args, kwargs)

    def call_method(self, target: Target, args: Tuple, kwargs: Dict) -> Any:
        if target == 'neg':
            call_self, *args_tail = args
            return call_self.sigmoid(*args_tail, **kwargs)
        return super().call_method(target, args, kwargs)

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)
input = torch.randn(3, 4)
result = NegSigmSwapInterpreter(gm).run(input)
torch.testing.assert_close(result, torch.neg(input).sigmoid())

Parameters:

Note

Backwards-compatibility for this API is guaranteed.
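
Interpreter is equally useful for analysis passes. A minimal sketch that records each node’s concrete output shape by overriding run_node (the class name is illustrative; this mirrors the familiar shape-propagation pattern):

import torch
import torch.fx
from torch.fx import Interpreter

class ShapeRecorder(Interpreter):
    def run_node(self, n):
        result = super().run_node(n)
        # Stash the concrete shape on the node's metadata, when available
        if isinstance(result, torch.Tensor):
            n.meta['shape'] = tuple(result.shape)
        return result

gm = torch.fx.symbolic_trace(lambda x: x.relu().sum())
ShapeRecorder(gm).run(torch.randn(2, 3))
for node in gm.graph.nodes:
    print(node.name, node.meta.get('shape'))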

call_function(target, args, kwargs)[source]

Execute a call_function node and return the result.

Parameters:

Return type:

Any

Return

Any: The value returned by the function invocation

Note

Backwards-compatibility for this API is guaranteed.

call_method(target, args, kwargs)[source]

Execute a call_method node and return the result.

Parameters:

Return type:

Any

Return

Any: The value returned by the method invocation

Note

Backwards-compatibility for this API is guaranteed.

call_module(target, args, kwargs)[source]

Execute a call_module node and return the result.

Parameters:

Return type:

Any

Return

Any: The value returned by the module invocation

Note

Backwards-compatibility for this API is guaranteed.

fetch_args_kwargs_from_env(n)[source]

Fetch the concrete values of args and kwargs of node n from the current execution environment.

Parameters:

n (Node) – The node for which args and kwargs should be fetched.

Returns:

args and kwargs with concrete values for n.

Return type:

Tuple[Tuple, Dict]

Note

Backwards-compatibility for this API is guaranteed.

fetch_attr(target)[source]

Fetch an attribute from the Module hierarchy of self.module.

Parameters:

target (str) – The fully-qualified name of the attribute to fetch

Returns:

The value of the attribute.

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

get_attr(target, args, kwargs)[source]

Execute a get_attr node. Will retrieve an attribute value from the Module hierarchy of self.module.

Parameters:

Returns:

The value of the attribute that was retrieved

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

map_nodes_to_values(args, n)[source]

Recursively descend through args and look up the concrete value for each Node in the current execution environment.

Parameters:

Return type:

Optional[Union[Tuple[Any, …], List[Any], Dict[str, Any], slice, range, Node, str, int, float, bool, complex, dtype, Tensor, device, memory_format, layout]]

Note

Backwards-compatibility for this API is guaranteed.

output(target, args, kwargs)[source]

Execute an output node. This really just retrieves the value referenced by the output node and returns it.

Parameters:

Returns:

The return value referenced by the output node

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

placeholder(target, args, kwargs)[source]

Execute a placeholder node. Note that this is stateful: Interpreter maintains an internal iterator over arguments passed to run and this method returns next() on that iterator.

Parameters:

Returns:

The argument value that was retrieved.

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

run(*args, initial_env=None, enable_io_processing=True)[source]

Run module via interpretation and return the result.

Parameters:

Returns:

The value returned from executing the Module

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.
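
A minimal sketch of partial evaluation with initial_env, which pre-seeds a node’s value so run() skips executing it (the function and seeded value are illustrative):

import torch
import torch.fx
from torch.fx import Interpreter

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)
interp = Interpreter(gm)
sig_node = next(n for n in gm.graph.nodes if n.target == torch.sigmoid)
# Nodes already present in the environment are not re-executed
result = interp.run(torch.randn(3, 4), initial_env={sig_node: torch.zeros(3, 4)})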

run_node(n)[source]

Run a specific node n and return the result. Calls into placeholder, get_attr, call_function, call_method, call_module, or output depending on node.op

Parameters:

n (Node) – The Node to execute

Returns:

The result of executing n

Return type:

Any

Note

Backwards-compatibility for this API is guaranteed.

class torch.fx.Transformer(module)[source]

Transformer is a special type of interpreter that produces a new Module. It exposes a transform() method that returns the transformed Module. Unlike Interpreter, Transformer does not require concrete arguments to run; it works entirely symbolically.

Example

Suppose we want to swap all instances of torch.neg with torch.sigmoid and vice versa (including their Tensor method equivalents). We could subclass Transformer like so:

import torch
import torch.fx
from torch.fx import Transformer
from torch.fx.node import Argument, Target
from typing import Any, Dict, Tuple

class NegSigmSwapXformer(Transformer):
    def call_function(self, target: Target, args: Tuple[Argument, ...], kwargs: Dict[str, Any]) -> Any:
        if target == torch.sigmoid:
            return torch.neg(*args, **kwargs)
        return super().call_function(target, args, kwargs)

    def call_method(self, target: Target, args: Tuple[Argument, ...], kwargs: Dict[str, Any]) -> Any:
        if target == 'neg':
            call_self, *args_tail = args
            return call_self.sigmoid(*args_tail, **kwargs)
        return super().call_method(target, args, kwargs)

def fn(x):
    return torch.sigmoid(x).neg()

gm = torch.fx.symbolic_trace(fn)

transformed: torch.nn.Module = NegSigmSwapXformer(gm).transform()
input = torch.randn(3, 4)
torch.testing.assert_close(transformed(input), torch.neg(input).sigmoid())

Parameters:

module (GraphModule) – The Module to be transformed.

Note

Backwards-compatibility for this API is guaranteed.

call_function(target, args, kwargs)[source]

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Any

call_module(target, args, kwargs)[source]

Note

Backwards-compatibility for this API is guaranteed.

Return type:

Any

get_attr(target, args, kwargs)[source]

Execute a get_attr node. In Transformer, this is overridden to insert a new get_attr node into the output graph.

Parameters:

Return type:

Proxy

Note

Backwards-compatibility for this API is guaranteed.

placeholder(target, args, kwargs)[source]

Execute a placeholder node. In Transformer, this is overridden to insert a new placeholder into the output graph.

Parameters:

Return type:

Proxy

Note

Backwards-compatibility for this API is guaranteed.

transform()[source]

Transform self.module and return the transformed GraphModule.

Note

Backwards-compatibility for this API is guaranteed.

Return type:

GraphModule

torch.fx.replace_pattern(gm, pattern, replacement)[source]

Matches all possible non-overlapping sets of operators and their data dependencies (pattern) in the Graph of a GraphModule (gm), then replaces each of these matched subgraphs with another subgraph (replacement).

Parameters:

Returns:

A list of Match objects representing the places in the original graph that pattern was matched to. The list is empty if there are no matches. Match is defined as:

class Match(NamedTuple):
    # Node from which the match was found
    anchor: Node
    # Maps nodes in the pattern subgraph to nodes in the larger graph
    nodes_map: Dict[Node, Node]

Return type:

List[Match]

Examples:

import torch
from torch.fx import symbolic_trace, subgraph_rewriter

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, w1, w2):
        m1 = torch.cat([w1, w2]).sum()
        m2 = torch.cat([w1, w2]).sum()
        return x + torch.max(m1) + torch.max(m2)

def pattern(w1, w2):
    return torch.cat([w1, w2]).sum()

def replacement(w1, w2):
    return torch.stack([w1, w2])

traced_module = symbolic_trace(M())

subgraph_rewriter.replace_pattern(traced_module, pattern, replacement)

The above code will first match pattern in the forward method of traced_module. Pattern-matching is done based on use-def relationships, not node names. For example, if you had p = torch.cat([a, b]) in pattern, you could match m = torch.cat([a, b]) in the original forward function, despite the variable names being different (p vs m).

The return statement in pattern is matched based on its value only; it may or may not match to the return statement in the larger graph. In other words, the pattern doesn’t have to extend to the end of the larger graph.

When the pattern is matched, it will be removed from the larger function and replaced by replacement. If there are multiple matches for pattern in the larger function, each non-overlapping match will be replaced. In the case of a match overlap, the first found match in the set of overlapping matches will be replaced. (“First” here being defined as the first in a topological ordering of the Nodes’ use-def relationships. In most cases, the first Node is the parameter that appears directly after self, while the last Node is whatever the function returns.)

One important thing to note is that the parameters of the pattern Callable must be used in the Callable itself, and the parameters of the replacement Callable must match the pattern. The first rule is why, in the above code block, the forward function has parameters x, w1, w2, but the pattern function only has parameters w1, w2. pattern doesn’t use x, so it shouldn’t specify x as a parameter. As an example of the second rule, consider replacing

def pattern(x, y):
    return torch.neg(x) + torch.relu(y)

with

def replacement(x, y):
    return torch.relu(x)

In this case, replacement needs the same number of parameters as pattern (both x and y), even though the parameter y isn’t used in replacement.

After calling subgraph_rewriter.replace_pattern, the generated Python code looks like this:

def forward(self, x, w1, w2):
    stack_1 = torch.stack([w1, w2])
    sum_1 = stack_1.sum()
    stack_2 = torch.stack([w1, w2])
    sum_2 = stack_2.sum()
    max_1 = torch.max(sum_1)
    add_1 = x + max_1
    max_2 = torch.max(sum_2)
    add_2 = add_1 + max_2
    return add_2

Note

Backwards-compatibility for this API is guaranteed.
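
For reference, a minimal sketch of consuming the returned Match list (a variant of the call in the example above that captures its return value):

matches = subgraph_rewriter.replace_pattern(traced_module, pattern, replacement)
for m in matches:
    # m.anchor is the node from which the match was found; m.nodes_map maps
    # pattern nodes to the corresponding nodes in the original graph
    print(m.anchor, m.nodes_map)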