Extending dispatcher for a new backend in C++



Created On: Feb 01, 2021 | Last Updated: Sep 23, 2024 | Last Verified: Nov 05, 2024

In this tutorial we will walk through all the steps necessary to extend the dispatcher to add a new device living outside the pytorch/pytorch repo and keep it in sync with native PyTorch devices. Here we'll assume that you're familiar with how to register a dispatched operator in C++ and how to write a custom autograd function.

Note

This tutorial touches many internal components inside PyTorch that are being actively improved; expect API changes if you decide to follow it. We'll keep this tutorial up to date with the latest APIs.

What’s a new backend?

Adding a new backend to PyTorch requires a lot of development and maintenance from backend extenders. Before adding a new backend, let’s first consider a few common use cases and recommended solutions for them:

In this tutorial we’ll mainly focus on adding a new out-of-tree device below. Adding out-of-tree support for a different tensor layout might share many common steps with devices, but we haven’t seen an example of such integrations yet so it might require additional work from PyTorch to support it.

Get a dispatch key for your backend

PyTorch operators are implemented in C++ and made available in Python frontend through Python bindings. The PyTorch dispatcher divides the implementation of an operator into multiple kernels, each of which is associated with a specific dispatch key. Supporting a new backend in PyTorch essentially means writing a kernel for each PyTorch operator in C++ and then registering them to a dispatch key representing your customized backend in the dispatcher.

A dispatch key is your identifier in the dispatcher system. The dispatcher looks at the dispatch keys carried on input tensors and calls the right kernel accordingly. PyTorch provides three reserved dispatch keys (and their corresponding Autograd keys) for prototyping out-of-tree backend extensions: PrivateUse1, PrivateUse2, and PrivateUse3, paired with AutogradPrivateUse1, AutogradPrivateUse2, and AutogradPrivateUse3.

You can choose any of the keys above to prototype your customized backend. To create a Tensor on the PrivateUse1 backend, you need to set the dispatch key in the TensorImpl constructor.

/* Example TensorImpl constructor */
TensorImpl(
    Storage&& storage,
    DispatchKeySet ks,
    const caffe2::TypeMeta data_type);

// To create a TensorImpl on PrivateUse1 backend, pass in the following ks to TensorImpl creation.
DispatchKeySet ks = c10::DispatchKeySet{c10::DispatchKey::PrivateUse1, c10::DispatchKey::AutogradPrivateUse1};

Note that the TensorImpl class above assumes your Tensor is backed by a storage like CPU/CUDA. We also provide OpaqueTensorImpl for backends without a storage, and you might need to tweak/override certain methods to fit your customized hardware. One example in the pytorch repo is the Vulkan TensorImpl.
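Putting the pieces above together, here is a minimal sketch of constructing a Tensor that carries the PrivateUse1 dispatch keys. It assumes you already have a c10::Storage (my_storage below) backed by your device's allocator; the helper function name and the float dtype are illustrative, not part of any PyTorch API:

#include <ATen/ATen.h>

// Hypothetical helper: wraps a device-backed Storage in a TensorImpl tagged
// with the PrivateUse1 (and AutogradPrivateUse1) dispatch keys.
at::Tensor make_private_use1_tensor(c10::Storage my_storage) {
  c10::DispatchKeySet ks{c10::DispatchKey::PrivateUse1,
                         c10::DispatchKey::AutogradPrivateUse1};
  // at::detail::make_tensor constructs the TensorImpl shown above and wraps
  // it in an at::Tensor handle.
  return at::detail::make_tensor<c10::TensorImpl>(
      std::move(my_storage), ks, caffe2::TypeMeta::Make<float>());
}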

Note

Once the prototype is done and you plan to do regular releases for your backend extension, please feel free to submit a PR to pytorch/pytorch to reserve a dedicated dispatch key for your backend.

Get the full list of PyTorch operators

PyTorch provides a full list of extensible C++ operators in the generated file build/aten/src/ATen/RegistrationDeclarations.h. This file is only available after building PyTorch from source. Here's a snippet of the file:

Tensor abs(const Tensor & self); // {"schema": "aten::abs(Tensor self) -> Tensor", "dispatch": "True", "default": "True"}
Tensor & abs_(Tensor & self); // {"schema": "aten::abs_(Tensor(a!) self) -> Tensor(a!)", "dispatch": "True", "default": "True"}
Tensor & abs_out(Tensor & out, const Tensor & self); // {"schema": "aten::abs.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)", "dispatch": "True", "default": "False"}
Tensor absolute(const Tensor & self); // {"schema": "aten::absolute(Tensor self) -> Tensor", "dispatch": "False", "default": "False"}
Tensor & absolute_(Tensor & self); // {"schema": "aten::absolute_(Tensor(a!) self) -> Tensor(a!)", "dispatch": "False", "default": "False"}
Tensor & absolute_out(Tensor & out, const Tensor & self); // {"schema": "aten::absolute.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)", "dispatch": "False", "default": "False"}
Tensor angle(const Tensor & self); // {"schema": "aten::angle(Tensor self) -> Tensor", "dispatch": "True", "default": "True"}
Tensor & angle_out(Tensor & out, const Tensor & self); // {"schema": "aten::angle.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)", "dispatch": "True", "default": "False"}
Tensor sgn(const Tensor & self); // {"schema": "aten::sgn(Tensor self) -> Tensor", "dispatch": "True", "default": "True"}

There are multiple fields associated with a single operator. Let's break them down using abs_out as an example:

- Tensor & abs_out(Tensor & out, const Tensor & self); is the C++ signature of the operator; your kernel must match this signature.
- aten::abs.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!) is the unique schema identifying the operator, which also carries aliasing and mutation annotations on top of the C++ signature.
- dispatch and default are booleans describing what native PyTorch kernels can do, and thus whether backend extenders are required to implement a kernel; see the two categories below for how to read them.

Register kernels for the new backend

To register your kernels to the PyTorch dispatcher, you can use the TORCH_LIBRARY_IMPL API described in Registering a Dispatched Operator in C++:

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl(<schema_my_op1>, &my_op1);
  m.impl(<schema_my_op2>, &my_op2);
  m.impl(<schema_my_op2_backward>, &my_op2_backward);
}

Now let's zoom in on which operators require a kernel from a customized backend and what exactly goes inside those kernels.

PyTorch currently has more than 1600 operators and the list is still growing. It's unrealistic for backend extensions to keep up with this pace. Even for native backends like CPU or CUDA, writing dedicated kernels for every new op often requires a lot of work.

Fortunately, some native PyTorch kernels are written in a way that decomposes into a combination of several known operators. In other words, you only need to implement a set of known operators (the ops that require registration below) instead of all PyTorch operators.

PyTorch operators can be classified into two categories:

- Ops that require registration: these ops' default implementation is backend specific, so you must write a kernel for your customized backend; otherwise calling such an op on your backend will error out. In RegistrationDeclarations.h these operators have dispatch set to True and default set to False.
- Registration is optional: these ops come with a default kernel that decomposes into other operators, so they work on any backend out of the box. You can still register a customized kernel to override the default one if you want. In RegistrationDeclarations.h these operators have dispatch set to False or default set to True.
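If you build PyTorch from source, the metadata above can be turned into a concrete to-do list. Here is a small helper sketch (not from the original tutorial; the path assumes the default in-tree build output) that lists the operators falling into the first category:

import json
import re

decl_path = "build/aten/src/ATen/RegistrationDeclarations.h"  # assumed build location

required_ops = []
with open(decl_path) as f:
    for line in f:
        # Each declaration carries a trailing JSON comment, e.g.
        # {"schema": "...", "dispatch": "True", "default": "False"}
        match = re.search(r"//\s*(\{.*\})\s*$", line)
        if not match:
            continue
        meta = json.loads(match.group(1))
        if meta.get("dispatch") == "True" and meta.get("default") == "False":
            required_ops.append(meta["schema"])

print(f"{len(required_ops)} operators require a backend-specific kernel")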

Autograd support for the new backend

Gradient formulas are mostly purely mathematical and thus general across backends. PyTorch often registers such a kernel to the alias dispatch key Autograd, which means it can be used by all backends.

For these operators you don't have to worry about their derivative formulas; you can just write forward definitions for the operators listed in RegistrationDeclarations.h, and PyTorch handles the backward for you automatically.

Tensor my_op1(const Tensor& self, const Tensor& other) {
  // call your backend-specific APIs to implement my_op1 so that
  // it matches PyTorch's native behavior
}

TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl(<schema_my_op1>, &my_op1);
}

In some cases, PyTorch backward kernel implementations are also device specific so that they can squeeze maximum performance out of each backend. For those operators you'll see op_backward show up in RegistrationDeclarations.h as a required registration as well.

Tensor my_op2_backward(const Tensor& self, const Tensor& other) {
  // call your backend-specific APIs to implement my_op2_backward so that
  // it matches PyTorch's native behavior
}

// Note backward kernel is still registered to PrivateUse1 instead of AutogradPrivateUse1.
// PyTorch will wrap your backward kernel with proper autograd setup and then link to it in
// my_op2's AutogradPrivateUse1 kernel.
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl(<schema_my_op2>, &my_op2);
  m.impl(<schema_my_op2_backward>, &my_op2_backward);
}

In a few rare cases, PyTorch's gradient formula for certain operators may make assumptions that don't generalize to all backends. In those cases backend extenders can optionally override the PyTorch Autograd layer by registering a kernel from torch::autograd::Function to the corresponding dispatch key (for example, AutogradPrivateUse1 if you're using PrivateUse1 for your backend):

class MyAddFunction : public torch::autograd::Function<MyAddFunction> {
 public:
  static Tensor forward(AutogradContext *ctx, torch::Tensor self, torch::Tensor other) {
    at::AutoNonVariableTypeMode g;
    return myadd(self, other);
  }

  static tensor_list backward(AutogradContext *ctx, tensor_list grad_outputs) {
    auto grad_output = grad_outputs[0];
    return {grad_output, grad_output};
  }
};

Tensor myadd_autograd(const Tensor& self, const Tensor& other) {
  return MyAddFunction::apply(self, other);
}

// Register the autograd kernel to AutogradPrivateUse1
TORCH_LIBRARY_IMPL(aten, AutogradPrivateUse1, m) {
  m.impl(<schema_myadd>, &myadd_autograd);
}

// Register the inference kernel to PrivateUse1
TORCH_LIBRARY_IMPL(aten, PrivateUse1, m) {
  m.impl(<schema_myadd>, &myadd);
}

With this trick you have full control over both training and inference behavior for the myadd operator in your backend. Here's an example in the pytorch/xla repository.

Build an extension

An out-of-tree backend is supported by adding a C++ extension to PyTorch. Once you have kernels and registrations ready, you can build a C++ extension by writing a setup.py script that uses setuptools to compile the C++ code. Here's a simplified example from the pytorch/xla repo:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='torch_xla',
    ext_modules=[
        CppExtension(
            '_XLAC',
            torch_xla_sources,
            include_dirs=include_dirs,
            extra_compile_args=extra_compile_args,
            library_dirs=library_dirs,
            extra_link_args=extra_link_args + \
                [make_relative_rpath('torch_xla/lib')],
        ),
    ],
    cmdclass={
        'build_ext': Build,  # Build is a derived class of BuildExtension
    }
    # more configs...
)

See our C++ extension tutorial for more details.
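After building and installing the extension (for example with python setup.py install), importing the compiled module is what actually runs your TORCH_LIBRARY_IMPL registrations. A minimal usage sketch, reusing the _XLAC module name from the setup.py above:

import torch   # typically imported first so the extension can find libtorch symbols
import _XLAC   # noqa: F401 -- importing the extension registers its kernels with the dispatcher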

JIT support

As we mentioned in Registering a Dispatched Operator in C++, kernels registered through the m.impl() API support being called in both unboxed and boxed ways. In other words, your customized backend can also work with our JIT tracing/scripting frontend just like in-tree backends such as CPU or CUDA do. You could potentially also write specialized optimization passes for your backend on a JIT graph, but we will not discuss that here since we haven't finalized the integration point in JIT; the current backend support focuses on the eager frontend for now.
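For illustration (this example is not from the original tutorial), here is a minimal scripted function; each recorded aten call is still routed through the dispatcher at runtime, so creating the inputs on your backend's device sends them to your PrivateUse1 kernels:

import torch

@torch.jit.script
def scaled_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # recorded as aten::mul and aten::add in the TorchScript graph
    return x + 2.0 * y

x = torch.ones(4)  # replace with tensors created on your backend's device
y = torch.ones(4)
print(scaled_add(x, y))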

Testing your backend against native PyTorch backends

PyTorch lets tests run on multiple device types using its generic device type testing framework. You can find details about how tests use it and information about how to add a new device type. Once added, PyTorch tests using the generic device type testing framework will be run using your device type, too. See this Wiki page for an example of how tests are instantiated.

Running PyTorch’s existing test suites with your device type is important to ensure correctness, but not all PyTorch features are supported by every device type. The generic device type testing framework allows for considerable customization so that device types can select which tests to run, which dtypes they support, and even which precisions to use when comparing tensors for equality.
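As a hedged sketch (class and test names are made up) of what a test file hooked into the generic device type testing framework might look like, using instantiate_device_type_tests, dtypes, and precisionOverride from torch.testing._internal.common_device_type:

import torch
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, dtypes, precisionOverride)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestMyBackendOps(TestCase):
    @dtypes(torch.float32, torch.float64)       # restrict which dtypes this test runs with
    @precisionOverride({torch.float32: 1e-3})   # loosen the comparison tolerance for float32
    def test_mul(self, device, dtype):
        x = torch.ones(4, device=device, dtype=dtype)
        self.assertEqual(x * x, x)

# Generates per-device variants (TestMyBackendOpsCPU, TestMyBackendOpsCUDA, ...)
# and, once your device type is registered with the framework, variants for it too.
instantiate_device_type_tests(TestMyBackendOps, globals())

if __name__ == "__main__":
    run_tests()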

An example device type that uses the generic device type testing framework and doesn’t ship with PyTorch is XLA. See its extension of the generic device type testing framework, which contains examples of block listing tests, block listing dtypes, and overriding test precision.

The generic device type testing framework is actively developed. To request a feature please file an issue on PyTorch’s Github.

Backward Compatibility

Currently PyTorch can't guarantee backward compatibility for registered operators. Operators, as well as their schemas, might be added, modified, or deleted as needed. Registered kernels must exactly match the signatures of the PyTorch version you build against. If PyTorch adds more parameters (even with defaults) to an operator, your old registration won't work until it's updated to match PyTorch's new signature.
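As a hypothetical illustration (the operator name and flag below are made up), suppose a new PyTorch release adds a defaulted argument to an operator you had already registered:

// Old declaration in RegistrationDeclarations.h:
//   Tensor my_op(const Tensor & self);
// New declaration after upgrading PyTorch:
//   Tensor my_op(const Tensor & self, bool some_flag);  // the schema gives some_flag a default
//
// A kernel registered with the old C++ signature no longer matches the schema,
// so it must be updated before it registers cleanly against the new version:
at::Tensor my_op(const at::Tensor& self, bool some_flag) {
  // backend-specific implementation, now honoring `some_flag`
  return self;  // placeholder
}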

As a result, we highly recommend that out-of-tree backend extenders only sync with major PyTorch releases to minimize interruptions in development. PyTorch is on a quarterly release cadence. Backend extenders should join the #announcement channel at pytorch.slack.com to get the latest updates on releases.

Known issues & additional notes

Future Work

Making every component in PyTorch seamlessly extensible for an out-of-tree backend requires a lot of changes to PyTorch internals. Here are a few items that we're actively working on that might improve the experience in the future:

Stay in touch

Please use PyTorch dev discussions for questions and discussions. If you have any feature requests or bug reports, please file an issue on GitHub.

If you're interested in helping with any of the future work items above (e.g. adding more Math kernels for PyTorch operators in C++), please reach out to us through GitHub or Slack!