fix: fix the inappropriate lowering pass of aten::to by bowang007 · Pull Request #1649 · pytorch/TensorRT

Description

When there is an aten::to operation in the model such as:

sizes = sizes.to(device=boxes.device)

as happens here: https://github.com/facebookresearch/detectron2/blob/58e472e076a5d861fdcf773d9254a3664e045bf8/detectron2/modeling/poolers.py#L65, this operation only sets the device of the sizes variable. Since none of the other parameters of the aten::to op are explicitly set, it produces an aten::to node in the graph of the form:

 %sizes0.2 : Tensor = aten::to(%1954, %80, %80, %1955, %80, %6, %6, %80)

where:

%80 : NoneType = prim::Constant()
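
As a minimal sketch (the function name is hypothetical; only the device keyword mirrors the detectron2 call above), scripting a device-only .to() call shows this same pattern:

```python
import torch

@torch.jit.script
def set_device(sizes: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
    # Only device is specified; dtype, layout, pin_memory and memory_format default to None.
    return sizes.to(device=boxes.device)

# The printed graph contains a node of the form
#   %2 : Tensor = aten::to(%sizes, %none, %none, %device, %none, %false, %false, %none)
# i.e. every argument other than self and device is a NoneType/bool constant.
print(set_device.graph)
```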

These NoneType arguments are valid for this aten::to overload:

aten::to.dtype_layout(Tensor(a) self, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None, bool non_blocking=False, bool copy=False, MemoryFormat? memory_format=None) -> Tensor(a)

but not for the other overloads. However, our lowering pass converts the form above into one of those other forms on this line:

map_aten_dtype_layout.RegisterRewritePattern(to_dtype_layout_pattern, to_dtype_multi_input_pattern);
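
The overloads targeted by the multi-input pattern require a concrete dtype (and, for the device form, a concrete device), so the rewritten node no longer matches any registered schema. A quick way to see this is to list the registered aten::to schemas; the sketch below relies on torch._C._jit_get_schemas_for_operator, an internal, undocumented PyTorch helper that is assumed to be available in the build being used:

```python
# Sketch: list the registered aten::to overloads to compare their signatures.
import torch

for schema in torch._C._jit_get_schemas_for_operator("aten::to"):
    print(schema)

# Only to.dtype_layout declares every trailing argument as optional
# (ScalarType?, Layout?, Device?, ...); the other overloads take a required
# ScalarType/Device, so NoneType arguments in those positions do not type-check.
```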

That is why the following error message is printed:

RuntimeError: 0 INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":608, please report a bug to PyTorch. We don't have an op for aten::to but it isn't a special case.  Argument types: Tensor, Device, NoneType, bool, bool, NoneType, 
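
End to end, the failure can be reproduced with a sketch like the one below (hypothetical module, shapes, and dtypes; it assumes torch_tensorrt is installed and a CUDA device is available). On versions without this fix, compiling such a module is the kind of case that hit the assert above, as reported in #1530:

```python
# Sketch of a repro: a module whose only unusual op is a device-only .to().
import torch
import torch_tensorrt

class DeviceOnlyTo(torch.nn.Module):
    def forward(self, sizes: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # Lowered to the aten::to.dtype_layout form shown above.
        return sizes.to(device=boxes.device)

scripted = torch.jit.script(DeviceOnlyTo().eval().cuda())

# On affected versions the lowering pass rewrote the aten::to node into a
# multi-input form with NoneType arguments, tripping the alias-analysis
# INTERNAL ASSERT; with the corrected lowering pass the node lowers cleanly.
trt_module = torch_tensorrt.compile(
    scripted,
    inputs=[
        torch_tensorrt.Input(shape=[4], dtype=torch.float32),
        torch_tensorrt.Input(shape=[4, 4], dtype=torch.float32),
    ],
)
```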

Fixes #1530

Type of change

Checklist: